url (string, 62–66 chars) | repository_url (string, 1 class) | labels_url (string, 76–80 chars) | comments_url (string, 71–75 chars) | events_url (string, 69–73 chars) | html_url (string, 50–56 chars) | id (int64, 377M–2.15B) | node_id (string, 18–32 chars) | number (int64, 1–29.2k) | title (string, 1–487 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k–1.71k) | updated_at (int64, 1.54k–1.71k) | closed_at (int64, 1.54k–1.71k, ⌀ nullable) | author_association (string, 4 classes) | active_lock_reason (string, 2 classes) | body (string, 0–234k chars, ⌀ nullable) | reactions (dict) | timeline_url (string, 71–75 chars) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3013/comments | https://api.github.com/repos/huggingface/transformers/issues/3013/events | https://github.com/huggingface/transformers/issues/3013 | 570,709,317 | MDU6SXNzdWU1NzA3MDkzMTc= | 3,013 | Generation with gpt-2 | {
"login": "simonefrancia",
"id": 7140210,
"node_id": "MDQ6VXNlcjcxNDAyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7140210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonefrancia",
"html_url": "https://github.com/simonefrancia",
"followers_url": "https://api.github.com/users/simonefrancia/followers",
"following_url": "https://api.github.com/users/simonefrancia/following{/other_user}",
"gists_url": "https://api.github.com/users/simonefrancia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonefrancia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonefrancia/subscriptions",
"organizations_url": "https://api.github.com/users/simonefrancia/orgs",
"repos_url": "https://api.github.com/users/simonefrancia/repos",
"events_url": "https://api.github.com/users/simonefrancia/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonefrancia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"There is an example: https://github.com/huggingface/transformers/blob/master/examples/run_generation.py",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,587 | 1,582 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
Hi @julien-c and all,
we are training a GPT-2 language model on our custom dataset.
First, we trained a custom tokenizer with `tokenizers` (as described [here](https://github.com/huggingface/tokenizers/issues/166)).
Now I have a dummy GPT-2 model trained for 1 epoch, but my problem is with inference: the result is always the same when I feed the model the starting text.
Code (the same as at the link above):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("my-gpt2-tokenizer")
model = GPT2LMHeadModel.from_pretrained('my-gpt2-model')

generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[..., -1, :])  # greedy decoding
    generated += [token.tolist()]
    context = token.unsqueeze(0)

sequence = tokenizer.decode(generated)
print(sequence)
```
Using this code, the output is repeated:
```
The Manhattan bridge<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit<nl> I'm a little bit
```
I think there are basically two problems:
1. at every step we keep only the token that maximizes the probability (argmax), without any kind of sampling from the distribution (a sampling sketch follows below);
2. after the first step, the input fed back to the model is always just the previously predicted token, so the generation goes into a loop.
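For reference, a minimal sampling sketch for point 1 (a hypothetical helper, not from the thread; the `temperature` and `top_k` values are arbitrary):
```python
import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=0.7, top_k=50):
    # logits: 1-D tensor of next-token scores for the last position
    top_values, top_indices = torch.topk(logits / temperature, top_k)
    probs = F.softmax(top_values, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    # drop-in replacement for `torch.argmax(output[..., -1, :])` above
    return top_indices[choice]
```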
I don't have any other ideas. Is there an example of inference with the Hugging Face library?
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3013/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3012/comments | https://api.github.com/repos/huggingface/transformers/issues/3012/events | https://github.com/huggingface/transformers/issues/3012 | 570,706,290 | MDU6SXNzdWU1NzA3MDYyOTA= | 3,012 | batch_encode_plus with pad_to_max_length is not padding the output | {
"login": "dipanjannag",
"id": 1206413,
"node_id": "MDQ6VXNlcjEyMDY0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1206413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipanjannag",
"html_url": "https://github.com/dipanjannag",
"followers_url": "https://api.github.com/users/dipanjannag/followers",
"following_url": "https://api.github.com/users/dipanjannag/following{/other_user}",
"gists_url": "https://api.github.com/users/dipanjannag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dipanjannag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipanjannag/subscriptions",
"organizations_url": "https://api.github.com/users/dipanjannag/orgs",
"repos_url": "https://api.github.com/users/dipanjannag/repos",
"events_url": "https://api.github.com/users/dipanjannag/events{/privacy}",
"received_events_url": "https://api.github.com/users/dipanjannag/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1843377584,
"node_id": "MDU6TGFiZWwxODQzMzc3NTg0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch",
"name": "Version mismatch",
"color": "ddea7c",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"The code snippet prints `True` for me.\r\nI have the versions:\r\ntransformers version: 2.5.1\r\nPlatform: Linux\r\nPython version: 3.6.9\r\nPyTorch version: 1.4.0+cpu\r\nCan you try `pip install --upgrade transformers` and see whether the error is still there?\r\n\r\n(Posted on behalf of @patrickvonplaten who's having issues with Github)",
"Thanks @TevenLeScao it seems the issue was only for transformers ~2.3"
] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
## Information
Model I am using: `BertTokenizer`
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts.
The task I am working on is:
* [ ] Simple batch tokenization task
## To reproduce
```python
from transformers import BertTokenizer, BertForQuestionAnswering
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')

qt1 = ('What is the name of the repository?', 'pipeline have been included in the huggingface/transformers repository')
qt2 = ('What is the name of the repository?', 'What can be done with this???')
inp_raw = [qt1, qt2]

tck_temp = tokenizer.batch_encode_plus(inp_raw, max_length=20, pad_to_max_length=True)
print(tck_temp)

inp_ids = tck_temp['input_ids']
tck_type_ids = tck_temp['token_type_ids']
print(len(inp_ids[0]) == len(inp_ids[1]))  # prints False, but should print True
```
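A quick sanity check that could be appended to the snippet above (the `20` mirrors the `max_length` argument):
```python
lengths = {len(ids) for ids in tck_temp['input_ids']}
assert lengths == {20}, f"unexpected lengths: {lengths}"
```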
## Expected behavior
The code snippet should print `True`, but it prints `False`.
## Environment info
- `transformers` version: 2.3
- Platform: Linux
- Python version: 3.6
- PyTorch version: 1.4 (CPU)
[EDIT]: corrected the transformers version | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3012/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3011/comments | https://api.github.com/repos/huggingface/transformers/issues/3011/events | https://github.com/huggingface/transformers/pull/3011 | 570,660,178 | MDExOlB1bGxSZXF1ZXN0Mzc5NjU4MDYw | 3,011 | Add models special tokens to its pretrained configs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=h1) Report\n> Merging [#3011](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e693cd1e877aa191d3317faed33e87d1558c9406?src=pr&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3011 +/- ##\n==========================================\n- Coverage 77.25% 76.22% -1.03% \n==========================================\n Files 98 98 \n Lines 16040 16048 +8 \n==========================================\n- Hits 12392 12233 -159 \n- Misses 3648 3815 +167\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.05% <ø> (-0.33%)` | :arrow_down: |\n| [src/transformers/configuration\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbS5weQ==) | `96.36% <100%> (+0.13%)` | :arrow_up: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <100%> (+0.16%)` | :arrow_up: |\n| [src/transformers/configuration\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `92.59% <100%> (+0.13%)` | :arrow_up: |\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `93.87% <100%> (+0.39%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/3011/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=footer). Last update [e693cd1...f5b50c6](https://codecov.io/gh/huggingface/transformers/pull/3011?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Isn’t this duplicating info that’s already in the tokenizers?",
"@julien-c yeah it's duplicating information. \r\n\r\nThe main reason I added them is because when calling the function `model.generate()` one always has to put the special token ids into the function with:\r\n`model.generate(pad_token_id=tokenizer.pad_token_id, ...)` \r\n\r\nThe second reason was that every pretrained model already has the attributes `pad_token_id`, `eos_token_id` and `bos_token_id` which are just set to None no matter which pretrained model is loaded. \r\n\r\nOr we could just delete the attributes pad_token_id from the config - I think the generate() function is anyways the only function using self.config.pad_token_id\r\n",
"maybe the `generate()` method should actually be a pipeline? cf. the `FillMaskPipeline`?",
"Ok, I see your point! Should we maybe then also delete the self.pad_token_id, self.eos_token_id and self.bos_token_id in https://github.com/huggingface/transformers/blob/5bc99e7f33c83b23b88740877283098ef7964b73/src/transformers/configuration_utils.py#L78\r\n\r\nSo that it is clear that the tokens are no attribute of the models at all. @julien-c & @LysandreJik ",
"Yes, I think that would make sense. Are these used anywhere?",
"They are only used in the generate() function, but since there is also a pad_token_id argument to the function, they are not needed at all. I will open a new PR to delete them. ",
"As I mentioned before, I feel like we should go toward having a single configuration file for both models and tokenizers (at least for pretrained model, for newly initialized model this may imply forcing the user to supply a configuration object when creating the model/tokenizer).\r\n\r\nIn this case I don't see any problem with having token_id attributes in this configuration file that the model could use, this doesn't means we are gathering tokenizer and model, just that they are depending on a single configuration object.\r\n\r\nI do agree that we need to think this carefully though."
] | 1,582 | 1,583 | 1,583 | MEMBER | null | I think the token ids for each specific model should also be added to their pretrained configs. This would also make the `generate()` function much easier to use.
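For illustration, a rough sketch of the intended effect (the `gpt2` checkpoint and prompt are just examples; the commented-out call shows the proposed behavior, not the current API):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = torch.tensor([tokenizer.encode("Hello, my dog")])

# today, the special token ids must be passed in explicitly:
output = model.generate(input_ids, pad_token_id=tokenizer.eos_token_id)

# with the ids stored in the pretrained config, this could become:
# output = model.generate(input_ids)  # pad/eos/bos ids read from model.config
```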
If that's OK, I can also add these tokens for models that don't have an LM head. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3011/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3011",
"html_url": "https://github.com/huggingface/transformers/pull/3011",
"diff_url": "https://github.com/huggingface/transformers/pull/3011.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3011.patch",
"merged_at": 1583447209000
} |
https://api.github.com/repos/huggingface/transformers/issues/3010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3010/comments | https://api.github.com/repos/huggingface/transformers/issues/3010/events | https://github.com/huggingface/transformers/pull/3010 | 570,647,752 | MDExOlB1bGxSZXF1ZXN0Mzc5NjQ3NDcx | 3,010 | Possible improvement of padding logic in generate | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | MEMBER | null | I wanted to try out an approach to the padding logic (previously discussed with @thomwolf and @mfuntowicz) which is **different** from what is currently done (see the #2885 description).
Instead of setting the `pad_token_id` to `eos_token_id` if `pad_token_id` is not defined and `eos_token_id` is defined, the following logic could be applied:
If there is no `pad_token_id`, the user has to add the `pad_token_id` to the tokenizer and resize the model token embedding matrix to add an additional token vector to the weight matrix.
During my PR today, I encountered two problems:
1. Adding a token embedding vector causes some pretrained models (`gpt2` and `openai-gpt`) to often generate the newly added token (since input & output token embeddings are tied). A remedy for this is to simply always set the produced logits of the `pad_token_id` to -inf (a sketch follows after this list). This works, as the `pad_token` is then never generated, but it does change the internal behavior of `gpt2` and `openai-gpt`.
2. TransfoXL uses an adaptive token embedding matrix, which means the `resize_embedding_matrix` function produces an error when used with `TransfoXLHeadModel`. The `token_embedding_matrix` of TransfoXL therefore cannot be changed, so this approach doesn't work at all for TransfoXL (which is a pity, as TransfoXL generates very good language).
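A minimal sketch of the remedy described in point 1 (assumed shapes; not necessarily the code in this PR):
```python
import torch

def mask_pad_logits(logits: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # logits: (batch_size, vocab_size) next-token scores; setting the pad
    # column to -inf means the pad token is never generated
    logits[:, pad_token_id] = float("-inf")
    return logits
```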
In the current PR, some tests fail (TransfoXL and the slow tests of gpt2 and openai-gpt). I wanted to hear your opinion on this logic vs. the logic that is implemented now (see #2885) @thomwolf, @LysandreJik and @mfuntowicz
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3010/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3010",
"html_url": "https://github.com/huggingface/transformers/pull/3010",
"diff_url": "https://github.com/huggingface/transformers/pull/3010.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3010.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3009/comments | https://api.github.com/repos/huggingface/transformers/issues/3009/events | https://github.com/huggingface/transformers/pull/3009 | 570,492,033 | MDExOlB1bGxSZXF1ZXN0Mzc5NTE1NDQz | 3,009 | add crf layer to BERT, RoBERTa, XLMR | {
"login": "mezig351",
"id": 10896185,
"node_id": "MDQ6VXNlcjEwODk2MTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/10896185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mezig351",
"html_url": "https://github.com/mezig351",
"followers_url": "https://api.github.com/users/mezig351/followers",
"following_url": "https://api.github.com/users/mezig351/following{/other_user}",
"gists_url": "https://api.github.com/users/mezig351/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mezig351/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mezig351/subscriptions",
"organizations_url": "https://api.github.com/users/mezig351/orgs",
"repos_url": "https://api.github.com/users/mezig351/repos",
"events_url": "https://api.github.com/users/mezig351/events{/privacy}",
"received_events_url": "https://api.github.com/users/mezig351/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What do you think about just using https://github.com/harvardnlp/pytorch-struct (I'm biased)? It should remove the need for most of this code. \r\n\r\nAlso we now have a pytorch-lightning NER example which should make this much simpler. \r\n\r\nWould also be helpful to know if any of this improves prediction accuracy.",
"@srush thanks for the comment, I'll definitely check out the repo. Regarding accuracy, there are studies that suggest using CRF yields better results in some languages, for example in Portugese and Slavic languages:\r\n\r\narticle{souza2019portuguese,\r\n title={Portuguese Named Entity Recognition using BERT-CRF},\r\n author={Souza, F{\\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},\r\n journal={arXiv preprint arXiv:1909.10649},\r\n year={2019}\r\n}\r\n\r\narticle{arkhipov2019tuning,\r\n title={Tuning Multilingual Transformers for Named Entity Recognition on Slavic Languages},\r\n author={Arkhipov, Mikhail and Trofimova, Maria and Kuratov, Yuri and Sorokin, Alexey},\r\n journal={BSNLP’2019},\r\n pages={89},\r\n year={2019}\r\n}\r\n",
"this looks promising. is this going to be merged?",
"So as per my above comment, I'm not going to merge this example which feels too complex to me. But I will start an issue to put together some work `crf/` examples for ner/chunking/parsing that people can build on."
] | 1,582 | 1,584 | 1,584 | NONE | null | Building on pull request [bert(+lstm)+crf #2249], I made it more generic and similar to run_ner.py. There is a CRF layer adapted from https://github.com/jiesutd/LatticeLSTM, which does not require additional packages and was published at ACL 2018. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3009",
"html_url": "https://github.com/huggingface/transformers/pull/3009",
"diff_url": "https://github.com/huggingface/transformers/pull/3009.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3009.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3008/comments | https://api.github.com/repos/huggingface/transformers/issues/3008/events | https://github.com/huggingface/transformers/pull/3008 | 570,474,899 | MDExOlB1bGxSZXF1ZXN0Mzc5NTAwODUw | 3,008 | [WIP] Remove tokenizers dependency | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,586 | 1,586 | COLLABORATOR | null | In v2.5.1 the tokenizers don't default to the fast implementations (as introduced in v2.5.0), which I think is a good thing given the issues that have arisen from it. Even though the performance of the new tokenizers is phenomenal (and I compliment everyone who has been working on it!), it seems a bit premature to make `tokenizers` a dependency. (In addition, see for instance this topic concerning installation issues: https://github.com/huggingface/transformers/issues/2980.)
Even though the fast implementation isn't the default any more, it is still part of the dependencies in setup.py. This PR removes `tokenizers` from the dependency list but indicates in the documentation that having `tokenizers` installed and using `use_fast` can result in great performance improvements.
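For reference, the usual pattern for guarding an optional dependency looks roughly like this (a sketch, not necessarily what this PR implements):
```python
try:
    import tokenizers  # noqa: F401 -- optional fast (Rust) tokenizers
    _has_tokenizers = True
except ImportError:
    _has_tokenizers = False


def require_tokenizers():
    # called from the `use_fast=True` code paths
    if not _has_tokenizers:
        raise ImportError(
            "use_fast=True requires the `tokenizers` package: pip install tokenizers"
        )
```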
Generally, I am not satisfied with how this PR has been implemented (quite some duplication across the different tokenizers), but I don't really see another way. Alternative ideas are definitely welcome. If, on the other hand, you decide to keep the dependency, that is fine too.
Note: my flake8 keeps failing with an obscure error, so I can't do the manual code checking right now. I might try again later.
Note: tests will fail for the fast tokenizers (and perhaps on some other imports). I'll look further into this if you decide that this PR is welcome. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3008",
"html_url": "https://github.com/huggingface/transformers/pull/3008",
"diff_url": "https://github.com/huggingface/transformers/pull/3008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3008.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3007/comments | https://api.github.com/repos/huggingface/transformers/issues/3007/events | https://github.com/huggingface/transformers/issues/3007 | 570,440,246 | MDU6SXNzdWU1NzA0NDAyNDY= | 3,007 | [Benchmark] Pipeline for question answering | {
"login": "dipanjannag",
"id": 1206413,
"node_id": "MDQ6VXNlcjEyMDY0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1206413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dipanjannag",
"html_url": "https://github.com/dipanjannag",
"followers_url": "https://api.github.com/users/dipanjannag/followers",
"following_url": "https://api.github.com/users/dipanjannag/following{/other_user}",
"gists_url": "https://api.github.com/users/dipanjannag/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dipanjannag/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipanjannag/subscriptions",
"organizations_url": "https://api.github.com/users/dipanjannag/orgs",
"repos_url": "https://api.github.com/users/dipanjannag/repos",
"events_url": "https://api.github.com/users/dipanjannag/events{/privacy}",
"received_events_url": "https://api.github.com/users/dipanjannag/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"same experience with a 2080TI. It s like it s not batched...",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"As @pommedeterresautee mentioned, the samples are not batched. I've made a [simple modification](https://gist.github.com/vinicius-cleves/1b2d79a9665d165ac22451c225b2f8b0) to do batch inference on pytorch, but it didn`t seem to help much with the total processing time. \r\n"
] | 1,582 | 1,606 | 1,589 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
I'm trying to benchmark the QA model `bert-large-uncased-whole-word-masking-finetuned-squad`, but it seems extremely slow, e.g. 3-4 seconds for 1 question with 2 contexts.
I feel there is something I'm missing in the pipeline.
## Sample Code:
```python
from typing import List
from transformers import BertTokenizer, BertForQuestionAnswering, pipeline

def answer(self, contexts: List[str], question: str, **kwargs):
    # tokenizer, model and pipeline are all cached in the actual implementation
    # via reify (https://docs.pylonsproject.org/projects/pyramid/en/latest/api/decorator.html),
    # so model loading is not the problem
    tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', max_len=500)
    model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
    nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)

    pipeline_input = []
    for c in contexts:
        pipeline_input.append({
            'question': question,
            'context': c
        })
    answers = nlp(pipeline_input)
```
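One quick way to see where the time goes (a sketch building on the snippet above):
```python
import time

start = time.perf_counter()
answers = nlp(pipeline_input)
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s for {len(pipeline_input)} question/context pairs")
```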
## Set-up
CPU: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz
memory: 16GB
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3007/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3007/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3006/comments | https://api.github.com/repos/huggingface/transformers/issues/3006/events | https://github.com/huggingface/transformers/pull/3006 | 570,385,755 | MDExOlB1bGxSZXF1ZXN0Mzc5NDI2ODYw | 3,006 | [FIX] not training when epoch is small | {
"login": "mataney",
"id": 11559198,
"node_id": "MDQ6VXNlcjExNTU5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/11559198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mataney",
"html_url": "https://github.com/mataney",
"followers_url": "https://api.github.com/users/mataney/followers",
"following_url": "https://api.github.com/users/mataney/following{/other_user}",
"gists_url": "https://api.github.com/users/mataney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mataney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mataney/subscriptions",
"organizations_url": "https://api.github.com/users/mataney/orgs",
"repos_url": "https://api.github.com/users/mataney/repos",
"events_url": "https://api.github.com/users/mataney/events{/privacy}",
"received_events_url": "https://api.github.com/users/mataney/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cool, thanks @mataney! Do you mind running `make style` at the root of the repository to reformat the file according to `black` and `isort`, as indicated in our [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)?\r\n\r\nThis is so the `check_code_quality` passes.",
"@LysandreJik Thanks for your reply.\r\n\r\nRecommited with the black formatting.\r\nIt is now failing because something else. Do you think you can rerun the tests? I believe it has something to do with the Indeterminism of the tests.\r\n\r\n",
"Can I have your help here @LysandreJik "
] | 1,582 | 1,584 | 1,584 | CONTRIBUTOR | null | Closes https://github.com/huggingface/transformers/issues/2995
This solves a bug where, with a small number of epochs and a large `gradient_accumulation_steps`, we never train.
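Schematically, the failure mode looks like this (numbers made up):
```python
# with 10 batches per epoch and gradient_accumulation_steps=16, the usual
# step condition never fires, so optimizer.step() is never reached:
for step in range(10):              # batches in one short epoch
    if (step + 1) % 16 == 0:        # gradient accumulation boundary
        print("optimizer.step()")   # never executed
```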
Explained further here: https://github.com/huggingface/transformers/issues/2995 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3006",
"html_url": "https://github.com/huggingface/transformers/pull/3006",
"diff_url": "https://github.com/huggingface/transformers/pull/3006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3006.patch",
"merged_at": 1584631281000
} |
https://api.github.com/repos/huggingface/transformers/issues/3005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3005/comments | https://api.github.com/repos/huggingface/transformers/issues/3005/events | https://github.com/huggingface/transformers/issues/3005 | 570,371,044 | MDU6SXNzdWU1NzAzNzEwNDQ= | 3,005 | Output of pipeline feature extraction | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"I cannot reproduce this. Can you post a verifiable example with full code that I can just copy and paste?\r\n\r\nHere you see that I just get 7 tokens back:\r\n\r\n```python\r\nimport numpy as np\r\nfrom transformers import AutoTokenizer, AutoModel, pipeline\r\n\r\n\r\nmodel = AutoModel.from_pretrained('distilbert-base-uncased')\r\ntokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')\r\nnlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)\r\n\r\nfeatures = nlp('Do you like cookies ?')\r\nprint(features)\r\nfeatures = np.squeeze(features)\r\nprint(features.shape)\r\n# (7, 768)\r\n```",
"@BramVanroy Hi thanks for the quick reply.\r\n\r\nI have used the code example you provided and get the same output again:\r\n\r\n```\r\nimport numpy as np\r\nfrom transformers import AutoTokenizer, AutoModel, pipeline\r\n\r\n\r\nmodel = AutoModel.from_pretrained('distilbert-base-uncased')\r\ntokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')\r\nnlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer)\r\n\r\nfeatures = nlp('Do you like cookies ?')\r\nfeatures = np.squeeze(features)\r\nprint(feature.shape)\r\n# (512, 768)\r\n```\r\n\r\nYesterday I deleted my transformers directory and cloned the github repo of transformers again and used pip --install . to set everything up so I should be on the most recent version with no differences.",
"Hm, that is odd. I have tested with 2.5.0 and 2.5.1 and both give me 7 tokens back.\r\n\r\nCan you run `python transformers-cli env` and paste the result here?",
"I actually have a similar issue, but with the Fast Tokenizer for the `bert-base-uncased` model\r\n\r\n```\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\n\r\ntokenizer_fast = BertTokenizerFast.from_pretrained('bert-base-uncased', \r\nadd_special_tokens=False)\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', \r\nadd_special_tokens=False)\r\n\r\nsentence = 'We are very happy to include pipeline into the transformers repository.'\r\n\r\nnlp = pipeline('feature-extraction', model=model, tokenizer=tokenizer, device=0)\r\n\r\nnlp2 = pipeline('feature-extraction', model=model, tokenizer=tokenizer_fast, device=0)\r\n\r\ntnsor = np.squeeze(nlp(sentence))\r\n# (14, 768)\r\n\r\ntnsor = np.squeeze(nlp2(sentence))\r\n# (512, 768)\r\n\r\n```\r\n\r\nThe \"slow\" tokenizer gives me the expected 14 tokens (which is strange too because I set `add_special_tokens=False` but not relevant for this question) but the fast tokenizer gives me the padded 512 tokens.\r\n\r\nAny ideas? Thanks!",
"cc @mfuntowicz Users in this thread report that the behaviour of the fast tokenizer differs from the slow ones with respect to the pipeline. When the pipeline is used for feature extraction, the fast tokenizers return the fully padded output (512) instead of the expected output (number of subtokens). Not sure if related to https://github.com/huggingface/transformers/pull/2961",
"@BramVanroy I decided to clone and rebuild transformers again to make 100% sure I'm on the most recent version and have a clean working environment. After doing so I got the expected result of shape (<512, 768).\r\n\r\nIn the end I'm not sure what the problem was. Should I close this issue or keep it open for @mabergerx ?\r\n\r\n@mabergerx Try cloning the code into a new directory and rebuild from source. This ended up fixing the problem for me.",
"You can keep it open for now, because it seems to indicate some inconsistencies between versions or even commits. I will have a closer look when I find the time.",
"Regarding `add_special_tokens` behaviour, this is one of the major difference between fast and slow tokenizers. Fast requires the parameter to be defined at creation time (as you did), for the slow ones, it should be provided while calling methods like `encode`, `encode_plus`, `batch_encode_plus`. \r\n\r\nFor the initial padding issue, we fixed some stuff related to padding on transformers 2.5.1 which might also have resolved your issue. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | Hi,
I am using the new pipeline feature of transformers for feature extraction, and I have to say it's amazing. However, I would like to alter the output of the pipeline slightly, but I am not sure how, and I was hoping someone in the community could point me in the right direction.
I am using the following code snippet in my script:
```python
nlp = pipeline('feature-extraction', model=args.model, config=args.config, tokenizer=args.tokenizer, device=args.device)
features = nlp(value)
features = np.squeeze(features)
features = features.astype('float32')
h5f = h5py.File(args.output_path, 'a')
h5f.create_dataset(key, data=features)
h5f.close()
```
No matter the length of the input sentence, the output for every value I put into the pipeline has the shape (512, 768).
I understand that the 512 comes from padding and the 768 is the hidden size.
However, I would like the output to be, for example, (15, 768) or (312, 768) depending on the input, instead of always (512, 768). I know this is not standard, but for my purpose I need this format.
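The only workaround I have sketched so far (assuming the pipeline exposes its tokenizer as `nlp.tokenizer`; untested) is to slice the features down to the number of real tokens:
```python
n_tokens = len(nlp.tokenizer.encode(value))  # token count, incl. special tokens
features = features[:n_tokens]               # drop the padding positions
```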
Could someone please point me in the right direction on how to achieve this?
Thanks a lot! I'm really at a loss here.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3005/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3005/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3004/comments | https://api.github.com/repos/huggingface/transformers/issues/3004/events | https://github.com/huggingface/transformers/issues/3004 | 570,369,585 | MDU6SXNzdWU1NzAzNjk1ODU= | 3,004 | BART : How can I train and evaluate BART on CNN/DM dataset | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"BART has only just been implemented and is not part of release code yet. I'm sure that examples will be added later on, you just need a bit more patience.",
"How is this going? I would like to train BART for text summarization with my own dataset and I have no idea of what preprocessing is needed or what inputs the model needs. I would appreciate the help @sshleifer.",
"No training code yet, but we have an eval example. Its very new obviously, so make an issue and tag me if it breaks :)",
"https://github.com/huggingface/transformers/blob/b3e0a1cf0573a909f1769b5f1c2b7273faf36cf4/examples/summarization/bart/evaluate_cnn.py",
"The associated PR I opened has the training code. Just in case you want to test it out and run some experiments/give feedback. I based it on [this colab](https://colab.research.google.com/drive/1C4jEf0fnLiz6Xdx4TDz1OoO4BRCjCx1m) that I wrote. ",
"I couldn't reach the accuracy in the paper with CNNDM dataset using the pre-trained model (facebook/bart-large-cnn). Can anybody reproduce the accuracy properly?"
] | 1,582 | 1,614 | 1,585 | CONTRIBUTOR | null | # ❓ Questions & Help
@sshleifer
I found examples for text summarization on the CNN/DM dataset using BERT, but I couldn't find any example using BART.
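For context, this is the kind of inference sketch I'm after (hypothetical; it assumes a `transformers` version that ships BART and uses the `facebook/bart-large-cnn` checkpoint mentioned in the comments below):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "..."  # a CNN/DM article goes here
inputs = tokenizer.encode(article, return_tensors="pt", max_length=1024)
summary_ids = model.generate(inputs, num_beams=4, max_length=142, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```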
**Are you going to add it later, update the existing example to add BART, or is it not scheduled?** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3003/comments | https://api.github.com/repos/huggingface/transformers/issues/3003/events | https://github.com/huggingface/transformers/pull/3003 | 570,214,339 | MDExOlB1bGxSZXF1ZXN0Mzc5Mjg2NzQ1 | 3,003 | Test correct tokenizers after default switch | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3003",
"html_url": "https://github.com/huggingface/transformers/pull/3003",
"diff_url": "https://github.com/huggingface/transformers/pull/3003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3003.patch",
"merged_at": 1582587953000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3002/comments | https://api.github.com/repos/huggingface/transformers/issues/3002/events | https://github.com/huggingface/transformers/pull/3002 | 570,210,993 | MDExOlB1bGxSZXF1ZXN0Mzc5MjgzOTMx | 3,002 | Tokenizer Fast False by default | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3002",
"html_url": "https://github.com/huggingface/transformers/pull/3002",
"diff_url": "https://github.com/huggingface/transformers/pull/3002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3002.patch",
"merged_at": 1582587058000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3001/comments | https://api.github.com/repos/huggingface/transformers/issues/3001/events | https://github.com/huggingface/transformers/issues/3001 | 570,163,848 | MDU6SXNzdWU1NzAxNjM4NDg= | 3,001 | Changing the loss function in BERT | {
"login": "paul-you",
"id": 23263212,
"node_id": "MDQ6VXNlcjIzMjYzMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/23263212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paul-you",
"html_url": "https://github.com/paul-you",
"followers_url": "https://api.github.com/users/paul-you/followers",
"following_url": "https://api.github.com/users/paul-you/following{/other_user}",
"gists_url": "https://api.github.com/users/paul-you/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paul-you/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paul-you/subscriptions",
"organizations_url": "https://api.github.com/users/paul-you/orgs",
"repos_url": "https://api.github.com/users/paul-you/repos",
"events_url": "https://api.github.com/users/paul-you/events{/privacy}",
"received_events_url": "https://api.github.com/users/paul-you/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This is a very general PyTorch question. Please post it on Stack Overflow. https://stackoverflow.com/\r\n\r\nAbout the BERT part: `outputs` is a tuple. `[0]` is the last hidden state (sequence output), `[1]` is the pooled output.\r\n\r\n"
] | 1,582 | 1,582 | 1,582 | NONE | null | # ❓ Questions & Help
Hello,
I'm trying to replace the _CrossEntropyLoss_ with the _KLDivLoss_ for a sentence classification task using some golden logits and the logits from the BERT model.
```
import torch.nn.functional as F
from torch.nn import KLDivLoss

golden_logits = …  # the reference ("golden") logits, elided in the original
outputs = model(**inputs)
# the logits from BERT
logits = outputs[1]
loss_fct = KLDivLoss(reduction='sum')
loss = loss_fct(F.log_softmax(logits, dim=-1), F.softmax(golden_logits, dim=-1))
```
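One aside worth noting (a hedged observation, not from the original thread): `KLDivLoss` expects log-probabilities as input and probabilities as target, which the snippet above satisfies; PyTorch's docs also point to `reduction='batchmean'` as the reduction matching the mathematical KL definition:
```
loss_fct = KLDivLoss(reduction='batchmean')  # averages the summed KL over the batch
loss = loss_fct(F.log_softmax(logits, dim=-1), F.softmax(golden_logits, dim=-1))
```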
Am I doing this correctly? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3001/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3000/comments | https://api.github.com/repos/huggingface/transformers/issues/3000/events | https://github.com/huggingface/transformers/issues/3000 | 570,157,506 | MDU6SXNzdWU1NzAxNTc1MDY= | 3,000 | unk_token not set when loading TransformerXLTokenizer.from_pretrained() from a save_pretrained() | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | MEMBER | null | Using `TransfoXLTokenizer.from_pretrained()` when loading from `.save_pretrained()`-generated files.
Python complains about missing <unknown> token:
```
line 175, in _build_from_file
raise ValueError("No <unkown> token in vocabulary")
ValueError: No <unkown> token in vocabulary
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3000/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2999/comments | https://api.github.com/repos/huggingface/transformers/issues/2999/events | https://github.com/huggingface/transformers/pull/2999 | 570,084,710 | MDExOlB1bGxSZXF1ZXN0Mzc5MTc3NjQz | 2,999 | Create README.md for the new model fine tuned for Spanish POS tagging | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=h1) Report\n> Merging [#2999](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8194df8e0cff8e5866ec2bcbda34e3892f10eb39?src=pr&el=desc) will **decrease** coverage by `1.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2999 +/- ##\n==========================================\n- Coverage 77.16% 76.14% -1.02% \n==========================================\n Files 98 98 \n Lines 15999 15999 \n==========================================\n- Hits 12345 12182 -163 \n- Misses 3654 3817 +163\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=footer). Last update [8194df8...b00f716](https://codecov.io/gh/huggingface/transformers/pull/2999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2999",
"html_url": "https://github.com/huggingface/transformers/pull/2999",
"diff_url": "https://github.com/huggingface/transformers/pull/2999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2999.patch",
"merged_at": 1582572830000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2998/comments | https://api.github.com/repos/huggingface/transformers/issues/2998/events | https://github.com/huggingface/transformers/pull/2998 | 570,075,765 | MDExOlB1bGxSZXF1ZXN0Mzc5MTY5ODUw | 2,998 | kwargs are passed to both model and configuration in AutoModels | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,651 | 1,582 | MEMBER | null | The AutoModel doc says it passes arguments to the config, but it actually also passes them to the models, which makes them crash. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2998/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2998",
"html_url": "https://github.com/huggingface/transformers/pull/2998",
"diff_url": "https://github.com/huggingface/transformers/pull/2998.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2998.patch",
"merged_at": 1582571979000
} |
https://api.github.com/repos/huggingface/transformers/issues/2997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2997/comments | https://api.github.com/repos/huggingface/transformers/issues/2997/events | https://github.com/huggingface/transformers/pull/2997 | 570,066,443 | MDExOlB1bGxSZXF1ZXN0Mzc5MTYxOTEz | 2,997 | add explaining example to XLNet LM modeling | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Cool! Indeed, if I remember the paper correctly there is no need to shift the labels. Maybe we could update the [associated docstring](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlnet.py#L980-L985) to make sure no one gets confused again?\r\n\r\nDone :-) ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=h1) Report\n> Merging [#2997](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21d8b6a33ebf96680b6a0aabd27fa7eaa068da93?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2997 +/- ##\n=======================================\n Coverage 77.19% 77.19% \n=======================================\n Files 98 98 \n Lines 16013 16013 \n=======================================\n Hits 12361 12361 \n Misses 3652 3652\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2997/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=footer). Last update [21d8b6a...1868b91](https://codecov.io/gh/huggingface/transformers/pull/2997?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | 1) adds an example explaining how XLNet can be used for standard auto-regressive language modelling.
2) sets add_special_tokens=False in the simple example to make sure no <sep> and <cls> tokens are added to the input. They are used for special training (similar to BERT) and might be confusing to the user | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2997",
"html_url": "https://github.com/huggingface/transformers/pull/2997",
"diff_url": "https://github.com/huggingface/transformers/pull/2997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2997.patch",
"merged_at": 1582576958000
} |
https://api.github.com/repos/huggingface/transformers/issues/2996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2996/comments | https://api.github.com/repos/huggingface/transformers/issues/2996/events | https://github.com/huggingface/transformers/issues/2996 | 570,040,360 | MDU6SXNzdWU1NzAwNDAzNjA= | 2,996 | Trying to Use AlbertTokenizer With my own custom Vocab file | {
"login": "AES0007",
"id": 61427339,
"node_id": "MDQ6VXNlcjYxNDI3MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/61427339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AES0007",
"html_url": "https://github.com/AES0007",
"followers_url": "https://api.github.com/users/AES0007/followers",
"following_url": "https://api.github.com/users/AES0007/following{/other_user}",
"gists_url": "https://api.github.com/users/AES0007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AES0007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AES0007/subscriptions",
"organizations_url": "https://api.github.com/users/AES0007/orgs",
"repos_url": "https://api.github.com/users/AES0007/repos",
"events_url": "https://api.github.com/users/AES0007/events{/privacy}",
"received_events_url": "https://api.github.com/users/AES0007/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi, our Albert implementation only handles SentencePiece vocabulary.",
"Thank you for your response. I tried generating a sentencepiece vocab model via ..**_spm.SentencePieceTrainer.Train('--input=ALBERT_PEP3V.txt\r\n--model_prefix=albertv1 --vocab_size=10000 --hard_vocab_limit=False')_**....and a model is successfully generated, however when I run your tokenization all sentences get the exact same encoding; for example **_\" I am running\" will be [ 0, 3, 0, 3]_** and .....**_\" the dog jumps\" will be the same [0,3, 0, 3]_**. Any idea why this is happening?\r\n\r\nThanks again.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Can we use higgingface for sentence pair classification?\n\n\n\nChé Martin Ph.D\nT: 917-684-0864\nE: [email protected]\nLinkedIn: https://www.linkedin.com/in/chemartin\n________________________________\nFrom: stale[bot] <[email protected]>\nSent: Wednesday, April 29, 2020 10:09:51 AM\nTo: huggingface/transformers <[email protected]>\nCc: Che Martin <[email protected]>; Author <[email protected]>\nSubject: Re: [huggingface/transformers] Trying to Use AlbertTokenizer With my own custom Vocab file (#2996)\n\n\nThis issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n\n—\nYou are receiving this because you authored the thread.\nReply to this email directly, view it on GitHub<https://github.com/huggingface/transformers/issues/2996#issuecomment-621235452>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AOUU5C3VU3LKUY5A4X5URUTRPAYK7ANCNFSM4K2ODLBA>.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
## Details
Good Afternoon All,
I am trying to use AlbertTokenizer to tokenize my corpus with a custom vocab file. I tried the command: **AlbertTokenizer("my-custom-vocab", do_lower_case=True, keep_accents=False, bos_token='[CLS]', eos_token='[SEP]', unk_token='<unk>', sep_token='[SEP]', pad_token='<pad>', cls_token='[CLS]', mask_token='[MASK]')** but it gives an error: **_RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]_**. A subset of my vocab file is attached; what am I doing wrong?
Regards,
[vocab.txt](https://github.com/huggingface/transformers/files/4246094/vocab.txt)
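For reference, a sketch pieced together from the maintainer's replies (only SentencePiece vocabularies are supported, so AlbertTokenizer needs a trained `.model` file rather than a plain wordlist; the file names below come from the training command quoted in the comments and are otherwise assumptions):
```
import sentencepiece as spm
from transformers import AlbertTokenizer

# Train a SentencePiece model on the raw corpus first.
spm.SentencePieceTrainer.Train(
    "--input=ALBERT_PEP3V.txt --model_prefix=albertv1 "
    "--vocab_size=10000 --hard_vocab_limit=False"
)

# Then point the tokenizer at the resulting .model file, not a .txt wordlist.
tokenizer = AlbertTokenizer("albertv1.model", do_lower_case=True, keep_accents=False)
```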
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2996/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2995/comments | https://api.github.com/repos/huggingface/transformers/issues/2995/events | https://github.com/huggingface/transformers/issues/2995 | 569,934,996 | MDU6SXNzdWU1Njk5MzQ5OTY= | 2,995 | No optimizer steps when gradient_accumulation_steps smaller than epoch_iterator length | {
"login": "mataney",
"id": 11559198,
"node_id": "MDQ6VXNlcjExNTU5MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/11559198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mataney",
"html_url": "https://github.com/mataney",
"followers_url": "https://api.github.com/users/mataney/followers",
"following_url": "https://api.github.com/users/mataney/following{/other_user}",
"gists_url": "https://api.github.com/users/mataney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mataney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mataney/subscriptions",
"organizations_url": "https://api.github.com/users/mataney/orgs",
"repos_url": "https://api.github.com/users/mataney/repos",
"events_url": "https://api.github.com/users/mataney/events{/privacy}",
"received_events_url": "https://api.github.com/users/mataney/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834052574,
"node_id": "MDU6TGFiZWwxODM0MDUyNTc0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Sequence%20Classification",
"name": "Ex: Sequence Classification",
"color": "46FFCF",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"In this scenario, the user would do a single optimizer step over the whole batch?\r\n\r\nSure, we would welcome a PR! Could you add a comment as well, so that the purpose of this line may be clear for the user?",
"After the change we would do a single optimizer step over each *epoch*.\r\nAdded a comment and created a PR. \r\nhttps://github.com/huggingface/transformers/pull/3006\r\nCheers."
] | 1,582 | 1,584 | 1,584 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Doesn't matter.
Language I am using the model on (English, Chinese ...): Doesn't matter.
The problem arises when using:
* [x] the official example scripts
* [ ] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Any of the tasks.
* [ ] my own task or dataset
## To reproduce
Steps to reproduce the behavior:
1. Run `run_glue.py` on a dataset where the number of batches per epoch is smaller than `gradient_accumulation_steps`.
In https://github.com/huggingface/transformers/blob/8194df8e0cff8e5866ec2bcbda34e3892f10eb39/examples/run_glue.py#L233
`step` is at most `len(train_dataloader) - 1`, i.e. the number of batches per epoch minus one. If the epoch is short, we never enter the if condition, so `optimizer.step()` and everything after it is never called.
I know it's the user's responsibility to be aware of this, but it can easily be solved by altering the condition to be:
```
if (step + 1) % args.gradient_accumulation_steps == 0 or ((step + 1) < args.gradient_accumulation_steps and (step + 1) == len(epoch_iterator)):
```
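For context, a minimal sketch of where this condition sits in the training loop (simplified; names follow `run_glue.py`):
```
for step, batch in enumerate(epoch_iterator):
    loss = model(**batch)[0]
    if args.gradient_accumulation_steps > 1:
        loss = loss / args.gradient_accumulation_steps
    loss.backward()
    # The extra clause fires on the last batch of a short epoch, so small
    # datasets still get at least one optimizer update per epoch.
    if (step + 1) % args.gradient_accumulation_steps == 0 or (
        (step + 1) < args.gradient_accumulation_steps
        and (step + 1) == len(epoch_iterator)
    ):
        optimizer.step()
        scheduler.step()
        model.zero_grad()
```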
Should I create a small PR? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2995/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2994/comments | https://api.github.com/repos/huggingface/transformers/issues/2994/events | https://github.com/huggingface/transformers/pull/2994 | 569,920,896 | MDExOlB1bGxSZXF1ZXN0Mzc5MDQzMTIw | 2,994 | Tf qa pipelines test | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=h1) Report\n> Merging [#2994](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8194df8e0cff8e5866ec2bcbda34e3892f10eb39?src=pr&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2994 +/- ##\n==========================================\n+ Coverage 77.16% 77.22% +0.06% \n==========================================\n Files 98 98 \n Lines 15999 15999 \n==========================================\n+ Hits 12345 12355 +10 \n+ Misses 3654 3644 -10\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.37% <0%> (+0.16%)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `71.64% <0%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `74.5% <0%> (+5.88%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=footer). Last update [8194df8...0e7d609](https://codecov.io/gh/huggingface/transformers/pull/2994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c the TF_QA pipeline test currently fails with a segmentation error, and no way to debug it. We think it might be hardware related, hence the reduction in concurrency.",
"I think you inadvertently re-added `run_all_tests_torch_and_tf` (which should not be here anymore)",
"Maybe just mark the flaky test as `@slow`, and we'll see if it works more reliably on our own CI?",
"oops, indeed, my bad. Alright, that works as well."
] | 1,582 | 1,651 | 1,583 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2994/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2994",
"html_url": "https://github.com/huggingface/transformers/pull/2994",
"diff_url": "https://github.com/huggingface/transformers/pull/2994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2994.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2993/comments | https://api.github.com/repos/huggingface/transformers/issues/2993/events | https://github.com/huggingface/transformers/issues/2993 | 569,866,132 | MDU6SXNzdWU1Njk4NjYxMzI= | 2,993 | Too many bugs in Version 2.5.0 | {
"login": "erikchwang",
"id": 16256959,
"node_id": "MDQ6VXNlcjE2MjU2OTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/16256959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erikchwang",
"html_url": "https://github.com/erikchwang",
"followers_url": "https://api.github.com/users/erikchwang/followers",
"following_url": "https://api.github.com/users/erikchwang/following{/other_user}",
"gists_url": "https://api.github.com/users/erikchwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erikchwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erikchwang/subscriptions",
"organizations_url": "https://api.github.com/users/erikchwang/orgs",
"repos_url": "https://api.github.com/users/erikchwang/repos",
"events_url": "https://api.github.com/users/erikchwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/erikchwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Hi! Indeed, there have been a few issues as this was the first release incorporating `tokenizers` by default. A new version of `tokenizers` and `transformers` will be available either today or tomorrow and should fix most of these.",
"For future reference, when you say that some code \"fails\", please also provide the stack trace. This helps greatly when debugging.",
"Thanks, stack trace provided...\r\nI just noticed that in Version 2.5.0, `AutoTokenizer.from_pretrained()` takes a new argument `use_fast`, and defaults it to `True`. This seems to be the reason for the error, because when I set it to `False`, the loaded model can be correctly saved by `save_pretrained()`. \r\nI wonder why this `use_fast` argument is added, and why it is default to `True`? ",
"`use_fast` uses the `tokenizers` library which is a new, extremely fast implementation of different tokenizers. I agree that for the first few releases it might've been better to expose the argument but setting it to False by default as to catch errors only by early adopters. Now many errors are reported that could've otherwise been avoided. In the meantime, you can explicitly set it to False.",
"For Tokenizers library: \r\n1, Where is the document about how to install and use it? The Readme is too brief...\r\n2, I understand that it is designed as a combination of various tokenizers. But to use a pre-trained model, is it better to use the original tokenizer to avoid subtle differences like special tokens? If so, the Transformers library should not use the tokenizers from Tokenizers library by default...",
"`tokenizers` sits in its own repository. You can find it [here](https://github.com/huggingface/tokenizers) and its [Python](https://github.com/huggingface/tokenizers/tree/master/bindings/python) bindings here.\r\n\r\nI think that the fast tokenizers are tested to get the exact same output as the other ones.",
"Thanks...\r\nIt seems that `tokenizers` has been installed together with `transformers` by `pip install transformers`? \r\nIn the future, will the tokenizer classes (e.g. BertTokenizer, AutoTokenizer, etc.) still be kept in the `transformers` library? Or they will be deprecated? ",
"I cannot answer that, I don't know what the roadmap looks like.",
"Install Python 64-bit instead of 32-bit solved my same issue.",
"Which issue did you solved? \r\nI think 64-bit Python is almost used by everyone...",
"1) This issue should be opened on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) as it is an installation issue with the `huggingface/tokenizers` library.\r\n\r\n2) This issue is solved in the current master (and 2.5.1) as well.\r\n\r\n3) This is fixed in https://github.com/huggingface/transformers/pull/3198 which will be merged in a bit.",
"i still have this prob, is anyone can tell me how to solve it?",
"which problem?",
"Still seeing the error \r\n```\r\n[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1506] CHECK failed: (index) < (current_size_): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) < (current_size_): \r\n```\r\nhow do I work around this?",
"Hi @catyeo18, please provide the code that gets you this error, alongside the different versions of software you're using. Here's the [template](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title=) for bug reports. Thank you."
] | 1,582 | 1,587 | 1,583 | NONE | null | 1. It cannot be installed on macOS. By running `pip install -U transformers`, I got the following errors:
> Building wheels for collected packages: tokenizers
> Building wheel for tokenizers (PEP 517) ... error
> ERROR: Command errored out with exit status 1:
> command: /anaconda/bin/python /anaconda/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/5h/fr2vhgsx4jd8wz4bphzt22_8p1v0bf/T/tmpfh6km7na
> cwd: /private/var/folders/5h/fr2vhgsx4jd8wz4bphzt22_8p1v0bf/T/pip-install-fog09t3h/tokenizers
> Complete output (36 lines):
> running bdist_wheel
> running build
> running build_py
> creating build
> creating build/lib
> creating build/lib/tokenizers
> copying tokenizers/__init__.py -> build/lib/tokenizers
> creating build/lib/tokenizers/models
> copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
> creating build/lib/tokenizers/decoders
> copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
> creating build/lib/tokenizers/normalizers
> copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
> creating build/lib/tokenizers/pre_tokenizers
> copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
> creating build/lib/tokenizers/processors
> copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
> creating build/lib/tokenizers/trainers
> copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
> creating build/lib/tokenizers/implementations
> copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
> copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
> copying tokenizers/__init__.pyi -> build/lib/tokenizers
> copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
> copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
> copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
> copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
> copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
> copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
> running build_ext
> running build_rust
> error: Can not find Rust compiler
> ----------------------------------------
> ERROR: Failed building wheel for tokenizers
> Running setup.py clean for tokenizers
> Failed to build tokenizers
> ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
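For reference (not part of the original report): the failure above means the `tokenizers` wheel has to be compiled from source and no Rust toolchain is available. Installing Rust via rustup before retrying is a plausible workaround:
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
pip install -U transformers
```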
2. On Linux, it can be installed, but it fails with the following code:
> import transformers
> transformers.AutoTokenizer.from_pretrained("bert-base-cased").save_pretrained("./")
> transformers.AutoModel.from_pretrained("bert-base-cased").save_pretrained("./")
> transformers.AutoTokenizer.from_pretrained("./")
> transformers.AutoModel.from_pretrained("./")
Actually, it is the second line that generates the following errors:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/anaconda/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 587, in save_pretrained
> return vocab_files + (special_tokens_map_file, added_tokens_file)
> TypeError: unsupported operand type(s) for +: 'NoneType' and 'tuple'
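A workaround surfaced in the discussion is to disable the new fast tokenizer until the fix lands in 2.5.1; a minimal sketch:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-cased", use_fast=False)
tokenizer.save_pretrained("./")  # no longer raises the TypeError
```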
3. The vocabulary size of xlm-roberta is wrong, so the following code fails (this bug also exists in Version 2.4.1):
> import transformers
> tokenizer = transformers.AutoTokenizer.from_pretrained("xlm-roberta-base")
> tokenizer.convert_ids_to_tokens(range(tokenizer.vocab_size))
The error is actually caused by the wrong vocab size:
> [libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1506] CHECK failed: (index) < (current_size_):
> terminate called after throwing an instance of 'google::protobuf::FatalException'
> what(): CHECK failed: (index) < (current_size_):
> zsh: abort python | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2993/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2992/comments | https://api.github.com/repos/huggingface/transformers/issues/2992/events | https://github.com/huggingface/transformers/issues/2992 | 569,856,054 | MDU6SXNzdWU1Njk4NTYwNTQ= | 2,992 | Train TFXLNetForSequenceClassification model failed. | {
"login": "qtzheng",
"id": 12178769,
"node_id": "MDQ6VXNlcjEyMTc4NzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/12178769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qtzheng",
"html_url": "https://github.com/qtzheng",
"followers_url": "https://api.github.com/users/qtzheng/followers",
"following_url": "https://api.github.com/users/qtzheng/following{/other_user}",
"gists_url": "https://api.github.com/users/qtzheng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qtzheng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qtzheng/subscriptions",
"organizations_url": "https://api.github.com/users/qtzheng/orgs",
"repos_url": "https://api.github.com/users/qtzheng/repos",
"events_url": "https://api.github.com/users/qtzheng/events{/privacy}",
"received_events_url": "https://api.github.com/users/qtzheng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | NONE | null | # ❓ Questions & Help
Train TFXLNetForSequenceClassification model failed.
## Details
**train_dataset,dev_dataset:**
`<RepeatDataset shapes: ({input_ids: (None, None), attention_mask: (None, None), token_type_ids: (None, None)}, (None,)), types: ({input_ids: tf.int32, attention_mask: tf.int32, token_type_ids: tf.int32}, tf.int64)>`
```
model = TFXLNetForSequenceClassification.from_pretrained(path)
model.config.num_labels = 1
train_steps = 10
valid_steps = 5
model.fit(train_dataset,
epochs=6,
steps_per_epoch=train_steps,
validation_data=dev_dataset,
validation_steps=valid_steps,)
ValueError Traceback (most recent call last)
<ipython-input-76-7d07613f7463> in <module>
5 steps_per_epoch=train_steps,
6 validation_data=dev_dataset,
----> 7 validation_steps=valid_steps,)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233 max_queue_size=max_queue_size,
234 workers=workers,
--> 235 use_multiprocessing=use_multiprocessing)
236
237 total_samples = _get_total_number_of_samples(training_data_adapter)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 max_queue_size=max_queue_size,
592 workers=workers,
--> 593 use_multiprocessing=use_multiprocessing)
594 val_adapter = None
595 if validation_data:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 max_queue_size=max_queue_size,
705 workers=workers,
--> 706 use_multiprocessing=use_multiprocessing)
707
708 return adapter
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, **kwargs)
700
701 if standardize_function is not None:
--> 702 x = standardize_function(x)
703
704 # Note that the dataset instance is immutable, its fine to reusing the user
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in standardize_function(dataset)
682 return x, y
683 return x, y, sample_weights
--> 684 return dataset.map(map_fn, num_parallel_calls=dataset_ops.AUTOTUNE)
685
686 if mode == ModeKeys.PREDICT:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in map(self, map_func, num_parallel_calls)
1589 else:
1590 return ParallelMapDataset(
-> 1591 self, map_func, num_parallel_calls, preserve_cardinality=True)
1592
1593 def flat_map(self, map_func):
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, input_dataset, map_func, num_parallel_calls, use_inter_op_parallelism, preserve_cardinality, use_legacy_function)
3924 self._transformation_name(),
3925 dataset=input_dataset,
-> 3926 use_legacy_function=use_legacy_function)
3927 self._num_parallel_calls = ops.convert_to_tensor(
3928 num_parallel_calls, dtype=dtypes.int32, name="num_parallel_calls")
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, use_legacy_function, defun_kwargs)
3145 with tracking.resource_tracker_scope(resource_tracker):
3146 # TODO(b/141462134): Switch to using garbage collection.
-> 3147 self._function = wrapper_fn._get_concrete_function_internal()
3148
3149 if add_to_graph:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal(self, *args, **kwargs)
2393 """Bypasses error checking when getting a graph function."""
2394 graph_function = self._get_concrete_function_internal_garbage_collected(
-> 2395 *args, **kwargs)
2396 # We're returning this concrete function to someone, and they may keep a
2397 # reference to the FuncGraph without keeping a reference to the
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2387 args, kwargs = None, None
2388 with self._lock:
-> 2389 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2390 return graph_function
2391
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _maybe_define_function(self, args, kwargs)
2701
2702 self._function_cache.missed.add(call_context_key)
-> 2703 graph_function = self._create_graph_function(args, kwargs)
2704 self._function_cache.primary[cache_key] = graph_function
2705 return graph_function, args, kwargs
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\eager\function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2591 arg_names=arg_names,
2592 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2593 capture_by_value=self._capture_by_value),
2594 self._function_attributes,
2595 # Tell the ConcreteFunction to clean up its graph once it goes out of
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\framework\func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
976 converted_func)
977
--> 978 func_outputs = python_func(*func_args, **func_kwargs)
979
980 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in wrapper_fn(*args)
3138 attributes=defun_kwargs)
3139 def wrapper_fn(*args): # pylint: disable=missing-docstring
-> 3140 ret = _wrapper_helper(*args)
3141 ret = structure.to_tensor_list(self._output_structure, ret)
3142 return [ops.convert_to_tensor(t) for t in ret]
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\data\ops\dataset_ops.py in _wrapper_helper(*args)
3080 nested_args = (nested_args,)
3081
-> 3082 ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
3083 # If `func` returns a list of tensors, `nest.flatten()` and
3084 # `ops.convert_to_tensor()` would conspire to attempt to stack
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\autograph\impl\api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
ValueError: in converted code:
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py:677 map_fn
batch_size=None)
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training.py:2410 _standardize_tensors
exception_prefix='input')
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\keras\engine\training_utils.py:510 standardize_input_data
'for each key in: ' + str(names))
ValueError: No data provided for "inputs". Need data for each key in: ['attention_mask', 'inputs', 'token_type_ids']
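# --- Possible workaround (untested sketch, inferred from the error above) ---
# Keras matches dict keys to the model's input names and, per the ValueError,
# expects the keys ['attention_mask', 'inputs', 'token_type_ids'], while the
# dataset uses 'input_ids'. Renaming that key before calling fit() may resolve it:
#
#   train_dataset = train_dataset.map(lambda x, y: ({
#       "inputs": x["input_ids"],
#       "attention_mask": x["attention_mask"],
#       "token_type_ids": x["token_type_ids"]}, y))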
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2992/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2991/comments | https://api.github.com/repos/huggingface/transformers/issues/2991/events | https://github.com/huggingface/transformers/pull/2991 | 569,834,572 | MDExOlB1bGxSZXF1ZXN0Mzc4OTcyMDYx | 2,991 | run_ner.py / bert-base-multilingual-cased can output empty tokens | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=h1) Report\n> Merging [#2991](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2991 +/- ##\n==========================================\n- Coverage 77.16% 76.12% -1.04% \n==========================================\n Files 98 98 \n Lines 15997 15997 \n==========================================\n- Hits 12344 12178 -166 \n- Misses 3653 3819 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2991/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=footer). Last update [38f5fe9...57f312d](https://codecov.io/gh/huggingface/transformers/pull/2991?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks good to me. Maybe we can make it a test? This might break some of the other examples as well I will check. "
] | 1,582 | 1,585 | 1,585 | MEMBER | null | This can happen when using bert-base-multilingual-cased with an input containing a single space.
In this case, the tokenizer outputs an empty word_tokens list, so label_ids ends up one entry longer than the tokens vector.
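A minimal sketch of the kind of guard this implies (names follow `examples/ner/utils_ner.py`; this illustrates the idea rather than reproducing the exact diff):
```
for word, label in zip(example.words, example.labels):
    word_tokens = tokenizer.tokenize(word)
    # bert-base-multilingual-cased can return [] for a lone space; skip such
    # words so label_ids never outgrows the tokens vector.
    if len(word_tokens) > 0:
        tokens.extend(word_tokens)
        label_ids.extend([label_map[label]] + [pad_token_label_id] * (len(word_tokens) - 1))
```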
Skipping words whose tokenization comes back empty keeps the two vectors aligned. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2991",
"html_url": "https://github.com/huggingface/transformers/pull/2991",
"diff_url": "https://github.com/huggingface/transformers/pull/2991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2991.patch",
"merged_at": 1585321195000
} |
https://api.github.com/repos/huggingface/transformers/issues/2990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2990/comments | https://api.github.com/repos/huggingface/transformers/issues/2990/events | https://github.com/huggingface/transformers/issues/2990 | 569,711,675 | MDU6SXNzdWU1Njk3MTE2NzU= | 2,990 | run_ner.py example | {
"login": "jmamou",
"id": 19263306,
"node_id": "MDQ6VXNlcjE5MjYzMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmamou",
"html_url": "https://github.com/jmamou",
"followers_url": "https://api.github.com/users/jmamou/followers",
"following_url": "https://api.github.com/users/jmamou/following{/other_user}",
"gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmamou/subscriptions",
"organizations_url": "https://api.github.com/users/jmamou/orgs",
"repos_url": "https://api.github.com/users/jmamou/repos",
"events_url": "https://api.github.com/users/jmamou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmamou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Are you looking for this? https://github.com/huggingface/transformers/blob/master/examples/ner/run_ner.py",
"On a (somewhat) related note: the code might still be there but on huggingface's webpage the following link doesn't point to anything: https://huggingface.co/transformers/examples.html#named-entity-recognition (there is no more an example of how to call the script)",
"the example has been moved to https://github.com/huggingface/transformers/tree/master/examples/ner\r\n",
"Thx Jonathan",
"I found it here: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py\n\nThe NER has been renamed to Token Classification.\n"
] | 1,582 | 1,589 | 1,582 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
It seems that examples/run_ner.py has been removed from the GitHub repo. Is there an issue with this code?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2989/comments | https://api.github.com/repos/huggingface/transformers/issues/2989/events | https://github.com/huggingface/transformers/pull/2989 | 569,605,554 | MDExOlB1bGxSZXF1ZXN0Mzc4Nzg1NjY4 | 2,989 | Documentation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=h1) Report\n> Merging [#2989](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c913eb9c3894b4031dc059d22b42e38a5fcef989?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2989 +/- ##\n==========================================\n+ Coverage 77.26% 77.27% +<.01% \n==========================================\n Files 98 98 \n Lines 16040 16047 +7 \n==========================================\n+ Hits 12393 12400 +7 \n Misses 3647 3647\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbS5weQ==) | `96.22% <ø> (ø)` | :arrow_up: |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.05% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.19% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZmxhdWJlcnQucHk=) | `40.42% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.5% <ø> (ø)` | :arrow_up: |\n| [src/transformers/configuration\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.87% <ø> (ø)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/2989/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=footer). Last update [c913eb9...b393150](https://codecov.io/gh/huggingface/transformers/pull/2989?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,651 | 1,582 | MEMBER | null | Updating documentation for tokenizers.
~Still left to do:~ will do in a future PR | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2989",
"html_url": "https://github.com/huggingface/transformers/pull/2989",
"diff_url": "https://github.com/huggingface/transformers/pull/2989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2989.patch",
"merged_at": 1582674216000
} |
https://api.github.com/repos/huggingface/transformers/issues/2988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2988/comments | https://api.github.com/repos/huggingface/transformers/issues/2988/events | https://github.com/huggingface/transformers/pull/2988 | 569,581,261 | MDExOlB1bGxSZXF1ZXN0Mzc4NzY3Mjg4 | 2,988 | Speed up GELU computation with torch.jit | {
"login": "mryab",
"id": 16766985,
"node_id": "MDQ6VXNlcjE2NzY2OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16766985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mryab",
"html_url": "https://github.com/mryab",
"followers_url": "https://api.github.com/users/mryab/followers",
"following_url": "https://api.github.com/users/mryab/following{/other_user}",
"gists_url": "https://api.github.com/users/mryab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mryab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mryab/subscriptions",
"organizations_url": "https://api.github.com/users/mryab/orgs",
"repos_url": "https://api.github.com/users/mryab/repos",
"events_url": "https://api.github.com/users/mryab/events{/privacy}",
"received_events_url": "https://api.github.com/users/mryab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=h1) Report\n> Merging [#2988](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/38f5fe9e0277df67a01db80a1c640ac072a2381e?src=pr&el=desc) will **decrease** coverage by `1.05%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2988 +/- ##\n==========================================\n- Coverage 77.16% 76.11% -1.06% \n==========================================\n Files 98 98 \n Lines 15997 15997 \n==========================================\n- Hits 12344 12176 -168 \n- Misses 3653 3821 +168\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `75% <100%> (-12.5%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=footer). Last update [38f5fe9...7e91273](https://codecov.io/gh/huggingface/transformers/pull/2988?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Any reason why the other activation functions (swish, _gelu_python) do not need jit? (I have no experience with JIT, so this is a genuine question. When should jit.script be used, and when shouldn't it?)",
"Indeed, it's possible to wrap both activations you mentioned with torch.jit; in case of `_gelu_python` it's likely to yield similar reduction in execution time. I will come back with benchmarking results and, if you think it's a good idea, will add JIT compilation to this PR. \r\n\r\nAnswering your question on use of `jit.script`: it usually makes sense to optimize functions with many elementwise ops, as they tend to get fused into a single kernel, which eliminates unnecessary memory accesses. There are other advantages, e.g. removing Python overhead and lifting GIL as a result; if you're interested, [this tutorial](https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/) and [this blogpost](http://blog.christianperone.com/2018/10/pytorch-1-0-tracing-jit-and-libtorch-c-api-to-integrate-pytorch-into-nodejs/) give a good overview of underlying optimizations. \r\n\r\nTL;DR: `jit.script` useful when you have TorchScript-friendly functions/modules with lots of custom PyTorch code; if your code uses unsupported Python features, you either leave it be or use torch.jit.trace.\r\n\r\nTalking of `swish`, there is something I'd like to mention: its current implementation can be made more memory-efficient (see [this](https://github.com/lukemelas/EfficientNet-PyTorch/pull/88) and [this](https://medium.com/the-artificial-impostor/more-memory-efficient-swish-activation-function-e07c22c12a76)) at the cost of losing `torch.jit`/`torch.onnx` support. Not sure if swish will benefit much from JIT compilation — would memory savings be useful then?",
"Here's the results for both activations (done on 1080Ti, I've updated the gist with two scripts):\r\n\r\n_gelu_python\r\n```torch.float32 (32, 128) gelu 1.8e-04 3.6e-04 jit 1.1e-04 1.7e-04 speedup forward 1.73 backward 2.12\r\ntorch.float32 (32, 512) gelu 1.9e-04 3.7e-04 jit 7.0e-05 1.6e-04 speedup forward 2.69 backward 2.36\r\ntorch.float32 (32, 1024) gelu 1.9e-04 3.7e-04 jit 6.9e-05 1.6e-04 speedup forward 2.71 backward 2.33\r\ntorch.float32 (32, 4096) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.66 backward 2.29\r\ntorch.float32 (256, 128) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.30\r\ntorch.float32 (256, 512) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.65 backward 2.31\r\ntorch.float32 (256, 1024) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30\r\ntorch.float32 (256, 4096) gelu 1.7e-04 3.6e-04 jit 9.8e-05 1.5e-04 speedup forward 1.74 backward 2.33\r\ntorch.float32 (1024, 128) gelu 1.8e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30\r\ntorch.float32 (1024, 512) gelu 1.9e-04 3.6e-04 jit 7.3e-05 1.6e-04 speedup forward 2.55 backward 2.34\r\ntorch.float32 (1024, 1024) gelu 1.7e-04 3.5e-04 jit 9.9e-05 1.6e-04 speedup forward 1.74 backward 2.29\r\ntorch.float32 (1024, 4096) gelu 5.1e-04 1.1e-03 jit 3.1e-04 2.9e-04 speedup forward 1.65 backward 3.78\r\ntorch.float32 (8192, 128) gelu 1.7e-04 3.6e-04 jit 1.0e-04 1.5e-04 speedup forward 1.74 backward 2.30\r\ntorch.float32 (8192, 512) gelu 5.1e-04 1.1e-03 jit 3.1e-04 2.9e-04 speedup forward 1.65 backward 3.78\r\ntorch.float32 (8192, 1024) gelu 9.8e-04 2.1e-03 jit 6.1e-04 4.6e-04 speedup forward 1.61 backward 4.43\r\ntorch.float32 (8192, 4096) gelu 3.8e-03 8.1e-03 jit 2.6e-03 1.9e-03 speedup forward 1.46 backward 4.15\r\ntorch.float16 (32, 128) gelu 1.9e-04 3.6e-04 jit 9.6e-05 1.8e-04 speedup forward 1.94 backward 1.98\r\ntorch.float16 (32, 512) gelu 1.8e-04 3.6e-04 jit 6.8e-05 1.5e-04 speedup forward 2.73 backward 2.38\r\ntorch.float16 (32, 1024) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.28\r\ntorch.float16 (32, 4096) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.68 backward 2.33\r\ntorch.float16 (256, 128) gelu 1.9e-04 3.6e-04 jit 7.0e-05 1.6e-04 speedup forward 2.66 backward 2.29\r\ntorch.float16 (256, 512) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.30\r\ntorch.float16 (256, 1024) gelu 1.9e-04 3.7e-04 jit 7.0e-05 1.6e-04 speedup forward 2.68 backward 2.31\r\ntorch.float16 (256, 4096) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.5e-04 speedup forward 2.56 backward 2.43\r\ntorch.float16 (1024, 128) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.67 backward 2.28\r\ntorch.float16 (1024, 512) gelu 1.9e-04 3.6e-04 jit 6.9e-05 1.6e-04 speedup forward 2.69 backward 2.30\r\ntorch.float16 (1024, 1024) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.5e-04 speedup forward 2.56 backward 2.40\r\ntorch.float16 (1024, 4096) gelu 3.3e-04 8.1e-04 jit 2.1e-04 2.3e-04 speedup forward 1.62 backward 3.50\r\ntorch.float16 (8192, 128) gelu 1.9e-04 3.7e-04 jit 7.4e-05 1.6e-04 speedup forward 2.56 backward 2.34\r\ntorch.float16 (8192, 512) gelu 3.4e-04 8.1e-04 jit 2.1e-04 2.3e-04 speedup forward 1.62 backward 3.51\r\ntorch.float16 (8192, 1024) gelu 6.3e-04 1.5e-03 jit 4.5e-04 3.7e-04 speedup forward 1.39 backward 4.06\r\ntorch.float16 (8192, 4096) gelu 2.5e-03 5.9e-03 jit 2.2e-03 1.5e-03 speedup forward 1.11 backward 3.93\r\n```\r\n\r\nswish\r\n\r\n```\r\ntorch.float32 (32, 128) swish 5.9e-05 1.8e-04 
jit 1.0e-04 1.8e-04 speedup forward 0.59 backward 1.01\r\ntorch.float32 (32, 512) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.30\r\ntorch.float32 (32, 1024) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.31\r\ntorch.float32 (32, 4096) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.33\r\ntorch.float32 (256, 128) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.08 backward 1.33\r\ntorch.float32 (256, 512) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.09 backward 1.36\r\ntorch.float32 (256, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.09 backward 1.32\r\ntorch.float32 (256, 4096) swish 8.6e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.17 backward 1.57\r\ntorch.float32 (1024, 128) swish 5.8e-05 1.8e-04 jit 5.5e-05 1.4e-04 speedup forward 1.07 backward 1.28\r\ntorch.float32 (1024, 512) swish 6.7e-05 1.9e-04 jit 5.6e-05 1.4e-04 speedup forward 1.20 backward 1.31\r\ntorch.float32 (1024, 1024) swish 8.6e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.17 backward 1.56\r\ntorch.float32 (1024, 4096) swish 2.6e-04 5.8e-04 jit 2.0e-04 2.4e-04 speedup forward 1.33 backward 2.39\r\ntorch.float32 (8192, 128) swish 8.8e-05 2.2e-04 jit 7.4e-05 1.4e-04 speedup forward 1.18 backward 1.63\r\ntorch.float32 (8192, 512) swish 2.6e-04 5.7e-04 jit 2.0e-04 2.4e-04 speedup forward 1.34 backward 2.36\r\ntorch.float32 (8192, 1024) swish 4.9e-04 1.0e-03 jit 3.7e-04 3.9e-04 speedup forward 1.32 backward 2.69\r\ntorch.float32 (8192, 4096) swish 1.9e-03 4.1e-03 jit 1.5e-03 1.6e-03 speedup forward 1.25 backward 2.56\r\ntorch.float16 (32, 128) swish 5.8e-05 1.8e-04 jit 9.5e-05 1.7e-04 speedup forward 0.62 backward 1.06\r\ntorch.float16 (32, 512) swish 5.8e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.09 backward 1.35\r\ntorch.float16 (32, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.3e-04 speedup forward 1.10 backward 1.32\r\ntorch.float16 (32, 4096) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.30\r\ntorch.float16 (256, 128) swish 5.8e-05 1.8e-04 jit 5.3e-05 1.3e-04 speedup forward 1.09 backward 1.33\r\ntorch.float16 (256, 512) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.29\r\ntorch.float16 (256, 1024) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.09 backward 1.30\r\ntorch.float16 (256, 4096) swish 7.5e-05 1.8e-04 jit 8.1e-05 1.4e-04 speedup forward 0.93 backward 1.31\r\ntorch.float16 (1024, 128) swish 5.9e-05 1.8e-04 jit 5.4e-05 1.4e-04 speedup forward 1.10 backward 1.29\r\ntorch.float16 (1024, 512) swish 5.9e-05 1.8e-04 jit 5.9e-05 1.4e-04 speedup forward 1.00 backward 1.30\r\ntorch.float16 (1024, 1024) swish 7.3e-05 1.8e-04 jit 8.1e-05 1.4e-04 speedup forward 0.91 backward 1.30\r\ntorch.float16 (1024, 4096) swish 2.1e-04 4.3e-04 jit 2.1e-04 2.1e-04 speedup forward 0.99 backward 2.08\r\ntorch.float16 (8192, 128) swish 7.4e-05 1.8e-04 jit 8.1e-05 1.3e-04 speedup forward 0.91 backward 1.37\r\ntorch.float16 (8192, 512) swish 2.1e-04 4.2e-04 jit 2.1e-04 2.1e-04 speedup forward 0.99 backward 2.03\r\ntorch.float16 (8192, 1024) swish 3.8e-04 7.5e-04 jit 3.7e-04 3.0e-04 speedup forward 1.02 backward 2.47\r\ntorch.float16 (8192, 4096) swish 1.4e-03 2.8e-03 jit 1.4e-03 1.1e-03 speedup forward 1.06 backward 2.60\r\n```",
"Same benchmarks on RTX 2080Ti:\r\n\r\n_python_gelu\r\n```\r\ntorch.float32 (32, 128) gelu 2.1e-04 5.9e-04 jit 1.2e-04 2.2e-04 speedup forward 1.79 backward 2.63\r\ntorch.float32 (32, 512) gelu 2.3e-04 6.0e-04 jit 6.5e-05 1.6e-04 speedup forward 3.59 backward 3.76\r\ntorch.float32 (32, 1024) gelu 2.3e-04 5.8e-04 jit 6.4e-05 1.6e-04 speedup forward 3.54 backward 3.73\r\ntorch.float32 (32, 4096) gelu 1.7e-04 3.3e-04 jit 6.2e-05 1.4e-04 speedup forward 2.65 backward 2.38\r\ntorch.float32 (256, 128) gelu 1.7e-04 3.6e-04 jit 6.6e-05 1.9e-04 speedup forward 2.59 backward 1.94\r\ntorch.float32 (256, 512) gelu 2.5e-04 6.7e-04 jit 6.7e-05 2.0e-04 speedup forward 3.71 backward 3.34\r\ntorch.float32 (256, 1024) gelu 2.3e-04 6.1e-04 jit 6.6e-05 1.9e-04 speedup forward 3.41 backward 3.25\r\ntorch.float32 (256, 4096) gelu 2.1e-04 5.3e-04 jit 9.2e-05 2.0e-04 speedup forward 2.33 backward 2.64\r\ntorch.float32 (1024, 128) gelu 2.1e-04 5.0e-04 jit 6.5e-05 1.9e-04 speedup forward 3.25 backward 2.70\r\ntorch.float32 (1024, 512) gelu 2.2e-04 5.2e-04 jit 6.7e-05 1.8e-04 speedup forward 3.21 backward 2.91\r\ntorch.float32 (1024, 1024) gelu 2.4e-04 6.1e-04 jit 9.2e-05 2.0e-04 speedup forward 2.56 backward 3.06\r\ntorch.float32 (1024, 4096) gelu 4.0e-04 9.3e-04 jit 2.7e-04 3.6e-04 speedup forward 1.44 backward 2.58\r\ntorch.float32 (8192, 128) gelu 2.3e-04 5.7e-04 jit 9.4e-05 2.2e-04 speedup forward 2.44 backward 2.63\r\ntorch.float32 (8192, 512) gelu 4.0e-04 9.3e-04 jit 2.7e-04 3.4e-04 speedup forward 1.47 backward 2.76\r\ntorch.float32 (8192, 1024) gelu 7.4e-04 1.6e-03 jit 5.5e-04 4.8e-04 speedup forward 1.36 backward 3.42\r\ntorch.float32 (8192, 4096) gelu 2.8e-03 5.8e-03 jit 2.2e-03 1.3e-03 speedup forward 1.26 backward 4.55\r\ntorch.float16 (32, 128) gelu 2.4e-04 6.7e-04 jit 1.1e-04 2.0e-04 speedup forward 2.16 backward 3.29\r\ntorch.float16 (32, 512) gelu 2.4e-04 5.0e-04 jit 7.6e-05 1.8e-04 speedup forward 3.11 backward 2.80\r\ntorch.float16 (32, 1024) gelu 2.1e-04 5.4e-04 jit 6.4e-05 1.8e-04 speedup forward 3.31 backward 3.03\r\ntorch.float16 (32, 4096) gelu 2.2e-04 5.7e-04 jit 6.5e-05 1.9e-04 speedup forward 3.40 backward 3.04\r\ntorch.float16 (256, 128) gelu 2.1e-04 5.3e-04 jit 7.1e-05 2.0e-04 speedup forward 2.93 backward 2.61\r\ntorch.float16 (256, 512) gelu 2.2e-04 4.8e-04 jit 7.9e-05 2.1e-04 speedup forward 2.83 backward 2.27\r\ntorch.float16 (256, 1024) gelu 2.2e-04 5.8e-04 jit 6.4e-05 1.8e-04 speedup forward 3.35 backward 3.28\r\ntorch.float16 (256, 4096) gelu 1.9e-04 4.5e-04 jit 6.5e-05 1.6e-04 speedup forward 2.93 backward 2.85\r\ntorch.float16 (1024, 128) gelu 1.9e-04 4.5e-04 jit 6.4e-05 1.7e-04 speedup forward 2.99 backward 2.73\r\ntorch.float16 (1024, 512) gelu 1.9e-04 4.4e-04 jit 5.9e-05 1.5e-04 speedup forward 3.18 backward 2.97\r\ntorch.float16 (1024, 1024) gelu 2.1e-04 5.2e-04 jit 6.5e-05 1.6e-04 speedup forward 3.16 backward 3.23\r\ntorch.float16 (1024, 4096) gelu 2.8e-04 6.4e-04 jit 1.5e-04 2.4e-04 speedup forward 1.83 backward 2.60\r\ntorch.float16 (8192, 128) gelu 2.1e-04 5.4e-04 jit 6.4e-05 1.8e-04 speedup forward 3.27 backward 2.96\r\ntorch.float16 (8192, 512) gelu 2.8e-04 6.7e-04 jit 1.5e-04 2.4e-04 speedup forward 1.83 backward 2.79\r\ntorch.float16 (8192, 1024) gelu 4.8e-04 1.1e-03 jit 3.0e-04 3.5e-04 speedup forward 1.57 backward 3.03\r\ntorch.float16 (8192, 4096) gelu 1.8e-03 3.4e-03 jit 1.5e-03 8.8e-04 speedup forward 1.14 backward 3.91\r\n```\r\n\r\nswish\r\n```\r\ntorch.float32 (32, 128) swish 7.5e-05 2.6e-04 jit 1.1e-04 2.0e-04 speedup forward 0.71 backward 
1.32\r\ntorch.float32 (32, 512) swish 7.5e-05 2.6e-04 jit 5.8e-05 1.7e-04 speedup forward 1.31 backward 1.57\r\ntorch.float32 (32, 1024) swish 7.2e-05 2.5e-04 jit 5.8e-05 1.6e-04 speedup forward 1.24 backward 1.50\r\ntorch.float32 (32, 4096) swish 7.1e-05 2.6e-04 jit 6.1e-05 1.9e-04 speedup forward 1.17 backward 1.38\r\ntorch.float32 (256, 128) swish 7.2e-05 2.5e-04 jit 5.7e-05 1.7e-04 speedup forward 1.26 backward 1.50\r\ntorch.float32 (256, 512) swish 7.4e-05 2.7e-04 jit 5.9e-05 1.8e-04 speedup forward 1.25 backward 1.55\r\ntorch.float32 (256, 1024) swish 7.3e-05 2.6e-04 jit 6.2e-05 2.0e-04 speedup forward 1.18 backward 1.35\r\ntorch.float32 (256, 4096) swish 8.5e-05 2.7e-04 jit 6.5e-05 1.6e-04 speedup forward 1.31 backward 1.75\r\ntorch.float32 (1024, 128) swish 7.4e-05 2.7e-04 jit 5.8e-05 1.8e-04 speedup forward 1.27 backward 1.47\r\ntorch.float32 (1024, 512) swish 7.5e-05 2.8e-04 jit 6.4e-05 2.2e-04 speedup forward 1.16 backward 1.29\r\ntorch.float32 (1024, 1024) swish 9.2e-05 3.3e-04 jit 7.0e-05 2.1e-04 speedup forward 1.32 backward 1.59\r\ntorch.float32 (1024, 4096) swish 1.9e-04 5.7e-04 jit 1.6e-04 2.7e-04 speedup forward 1.24 backward 2.10\r\ntorch.float32 (8192, 128) swish 9.1e-05 3.2e-04 jit 7.2e-05 2.0e-04 speedup forward 1.26 backward 1.54\r\ntorch.float32 (8192, 512) swish 1.9e-04 5.5e-04 jit 1.6e-04 2.7e-04 speedup forward 1.20 backward 2.03\r\ntorch.float32 (8192, 1024) swish 3.5e-04 8.8e-04 jit 3.2e-04 3.9e-04 speedup forward 1.09 backward 2.24\r\ntorch.float32 (8192, 4096) swish 1.3e-03 2.7e-03 jit 1.3e-03 1.0e-03 speedup forward 0.99 backward 2.62\r\ntorch.float16 (32, 128) swish 7.0e-05 2.5e-04 jit 1.0e-04 2.1e-04 speedup forward 0.69 backward 1.18\r\ntorch.float16 (32, 512) swish 6.9e-05 2.4e-04 jit 6.6e-05 1.8e-04 speedup forward 1.05 backward 1.38\r\ntorch.float16 (32, 1024) swish 7.0e-05 2.4e-04 jit 6.0e-05 1.7e-04 speedup forward 1.18 backward 1.43\r\ntorch.float16 (32, 4096) swish 6.9e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.14 backward 1.37\r\ntorch.float16 (256, 128) swish 6.5e-05 2.4e-04 jit 5.8e-05 1.6e-04 speedup forward 1.12 backward 1.48\r\ntorch.float16 (256, 512) swish 7.1e-05 2.6e-04 jit 6.0e-05 1.8e-04 speedup forward 1.20 backward 1.41\r\ntorch.float16 (256, 1024) swish 6.8e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.14 backward 1.37\r\ntorch.float16 (256, 4096) swish 7.1e-05 2.5e-04 jit 9.7e-05 2.1e-04 speedup forward 0.73 backward 1.20\r\ntorch.float16 (1024, 128) swish 7.0e-05 2.5e-04 jit 6.0e-05 1.8e-04 speedup forward 1.17 backward 1.42\r\ntorch.float16 (1024, 512) swish 7.2e-05 2.6e-04 jit 6.8e-05 1.7e-04 speedup forward 1.06 backward 1.49\r\ntorch.float16 (1024, 1024) swish 6.7e-05 2.4e-04 jit 9.7e-05 2.1e-04 speedup forward 0.69 backward 1.14\r\ntorch.float16 (1024, 4096) swish 1.3e-04 3.6e-04 jit 1.9e-04 1.8e-04 speedup forward 0.69 backward 1.98\r\ntorch.float16 (8192, 128) swish 7.0e-05 2.4e-04 jit 9.7e-05 1.9e-04 speedup forward 0.73 backward 1.26\r\ntorch.float16 (8192, 512) swish 1.3e-04 3.5e-04 jit 1.9e-04 2.2e-04 speedup forward 0.66 backward 1.62\r\ntorch.float16 (8192, 1024) swish 2.1e-04 5.7e-04 jit 3.5e-04 3.1e-04 speedup forward 0.62 backward 1.82\r\ntorch.float16 (8192, 4096) swish 7.6e-04 1.6e-03 jit 1.3e-03 7.2e-04 speedup forward 0.60 backward 2.17\r\n```\r\n\r\nSeems like it makes sense to compile _python_gelu, and for swish the benefits are negligible",
"We only use _gelu_python for torch < 1.4. \r\nMy only concern with this PR is that it will break in early pytorch versions or on CPU or something, can you test it under those circumstances?",
"I've tested the current implementation with pytorch==1.0.0 on CPU, and it indeed breaks because torch.jit did not support python floats at that time. I have two possible solutions for this, @sshleifer what will be the best one?\r\n\r\nFirst: slightly modify gelu_python and gelu_new to be backwards-compatible\r\n```\r\[email protected]\r\ndef jit_gelu_python(x):\r\n \"\"\" Original Implementation of the gelu activation function in Google Bert repo when initially created.\r\n For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):\r\n 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))\r\n This is now written in C in torch.nn.functional\r\n Also see https://arxiv.org/abs/1606.08415\r\n \"\"\"\r\n gelu_const = torch.sqrt(torch.full((), 2.0, dtype=x.dtype, device=x.device))\r\n return x * 0.5 * (1.0 + torch.erf(x / gelu_const))\r\n\r\[email protected]\r\ndef jit_gelu(x):\r\n \"\"\" Implementation of the gelu activation function currently in Google Bert repo (identical to OpenAI GPT).\r\n Also see https://arxiv.org/abs/1606.08415\r\n \"\"\"\r\n gelu_const = torch.sqrt(torch.full((), 2.0/math.pi, dtype=x.dtype, device=x.device))\r\n return 0.5 * x * (1 + torch.tanh(gelu_const * (x + 0.044715 * torch.pow(x, 3))))\r\n```\r\n\r\nSecond: use torch.jit.script only with pytorch>1.4.0. We won't need to wrap `gelu`, as it already has a native implementation, and for `gelu_new` we'll add a single check.\r\n",
"I've changed the PR so that gelu_new gets JIT-compiled only on pytorch>=1.4. Benchmarking resuts are the same with 3-4x faster forward and 3x faster backward for this activation (although no speedup on CPU float32). @sshleifer is it ready to be merged now?",
"In my opinion, yes. LGTM. \r\n@LysandreJik @julien-c this is a backwards compatible speedup. \r\n",
"Hello! Unfortunately, we'll have to [revert this PR](https://github.com/huggingface/transformers/pull/4050) as jitting an activation function prevents the model from being pickled.\r\n\r\nThis has already been an issue in several cases:\r\n- For TPU support, the models should be serializable. https://github.com/huggingface/transformers/pull/3743\r\n- For ONNX support (@mfuntowicz, @thomwolf)\r\n- For Pytorch-Lightning https://github.com/huggingface/transformers/issues/4038#issuecomment-620624613\r\n\r\nNonetheless, thank you for your contribution and for such a detailed study of what was to be gained from it."
] | 1,582 | 1,588 | 1,585 | CONTRIBUTOR | null | Currently, the implementation of the GELU activation uses several unfused pointwise operations. In my experiments, computing this activation takes about 10% of forward time for GPT2-like networks for inputs of size similar to (32,128). This PR speeds up the execution of gelu_new during both forward (~3-5x) and backward (~2-3x) passes with the help of torch.jit, which might be helpful for both training and inference.
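For reference, a sketch of the change itself — assuming the `gelu_new` definition in `src/transformers/activations.py`; per the review discussion, the decorator is only applied on torch >= 1.4:

```python
import math

import torch


@torch.jit.script  # fuses the elementwise ops below into a single kernel
def gelu_new(x):
    # tanh approximation of GELU (as in the Google BERT repo / OpenAI GPT-2).
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
```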
Below are the benchmarking results, done on pytorch v1.4.0 and transformers v2.5.0 with RTX 2080Ti and GTX 1080Ti. The benchmarking code is available [here](https://gist.github.com/mryab/d639d7ba5e741cdd6a6c712a118e97f8).
1080Ti:
```
torch.float32 (32, 128) gelu 2.6e-04 4.1e-04 jit 1.1e-04 1.8e-04 speedup forward 2.50 backward 2.27
torch.float32 (32, 512) gelu 2.6e-04 4.1e-04 jit 6.5e-05 1.5e-04 speedup forward 4.06 backward 2.67
torch.float32 (32, 1024) gelu 2.6e-04 4.0e-04 jit 6.7e-05 1.6e-04 speedup forward 3.94 backward 2.59
torch.float32 (32, 4096) gelu 2.5e-04 3.9e-04 jit 6.6e-05 1.6e-04 speedup forward 3.75 backward 2.51
torch.float32 (256, 128) gelu 2.7e-04 4.1e-04 jit 6.7e-05 1.6e-04 speedup forward 3.96 backward 2.61
torch.float32 (256, 512) gelu 2.5e-04 4.0e-04 jit 6.5e-05 1.5e-04 speedup forward 3.88 backward 2.57
torch.float32 (256, 1024) gelu 2.5e-04 4.0e-04 jit 6.2e-05 1.5e-04 speedup forward 4.05 backward 2.62
torch.float32 (256, 4096) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.52 backward 2.45
torch.float32 (1024, 128) gelu 2.5e-04 3.9e-04 jit 6.5e-05 1.5e-04 speedup forward 3.82 backward 2.57
torch.float32 (1024, 512) gelu 2.5e-04 3.8e-04 jit 7.2e-05 1.5e-04 speedup forward 3.43 backward 2.52
torch.float32 (1024, 1024) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.52 backward 2.44
torch.float32 (1024, 4096) gelu 8.8e-04 1.3e-03 jit 3.2e-04 3.5e-04 speedup forward 2.71 backward 3.79
torch.float32 (8192, 128) gelu 2.6e-04 4.2e-04 jit 1.0e-04 1.7e-04 speedup forward 2.51 backward 2.43
torch.float32 (8192, 512) gelu 8.8e-04 1.3e-03 jit 3.2e-04 3.5e-04 speedup forward 2.72 backward 3.80
torch.float32 (8192, 1024) gelu 1.7e-03 2.5e-03 jit 6.4e-04 5.9e-04 speedup forward 2.69 backward 4.30
torch.float32 (8192, 4096) gelu 6.7e-03 1.0e-02 jit 2.7e-03 2.5e-03 speedup forward 2.53 backward 4.05
torch.float16 (32, 128) gelu 2.6e-04 4.0e-04 jit 9.4e-05 1.8e-04 speedup forward 2.79 backward 2.24
torch.float16 (32, 512) gelu 2.5e-04 3.9e-04 jit 6.2e-05 1.4e-04 speedup forward 4.09 backward 2.74
torch.float16 (32, 1024) gelu 2.6e-04 4.0e-04 jit 6.2e-05 1.5e-04 speedup forward 4.22 backward 2.68
torch.float16 (32, 4096) gelu 2.4e-04 3.8e-04 jit 6.3e-05 1.5e-04 speedup forward 3.84 backward 2.56
torch.float16 (256, 128) gelu 2.6e-04 4.0e-04 jit 6.1e-05 1.4e-04 speedup forward 4.34 backward 2.81
torch.float16 (256, 512) gelu 2.5e-04 3.9e-04 jit 6.4e-05 1.5e-04 speedup forward 3.98 backward 2.59
torch.float16 (256, 1024) gelu 2.4e-04 3.7e-04 jit 6.3e-05 1.4e-04 speedup forward 3.82 backward 2.65
torch.float16 (256, 4096) gelu 2.3e-04 3.2e-04 jit 7.6e-05 1.4e-04 speedup forward 3.00 backward 2.32
torch.float16 (1024, 128) gelu 2.2e-04 3.2e-04 jit 6.3e-05 1.4e-04 speedup forward 3.47 backward 2.32
torch.float16 (1024, 512) gelu 2.2e-04 3.2e-04 jit 6.3e-05 1.4e-04 speedup forward 3.47 backward 2.31
torch.float16 (1024, 1024) gelu 2.3e-04 3.2e-04 jit 7.6e-05 1.4e-04 speedup forward 3.01 backward 2.31
torch.float16 (1024, 4096) gelu 5.4e-04 8.9e-04 jit 2.2e-04 2.6e-04 speedup forward 2.44 backward 3.40
torch.float16 (8192, 128) gelu 2.5e-04 3.8e-04 jit 7.6e-05 1.5e-04 speedup forward 3.29 backward 2.61
torch.float16 (8192, 512) gelu 5.4e-04 8.9e-04 jit 2.2e-04 2.5e-04 speedup forward 2.43 backward 3.49
torch.float16 (8192, 1024) gelu 1.0e-03 1.7e-03 jit 4.8e-04 4.6e-04 speedup forward 2.18 backward 3.60
torch.float16 (8192, 4096) gelu 4.2e-03 6.5e-03 jit 2.3e-03 2.0e-03 speedup forward 1.83 backward 3.30
```
RTX 2080Ti:
```
torch.float32 (32, 128) gelu 3.0e-04 6.2e-04 jit 1.2e-04 2.2e-04 speedup forward 2.50 backward 2.80
torch.float32 (32, 512) gelu 3.2e-04 6.8e-04 jit 6.8e-05 2.1e-04 speedup forward 4.66 backward 3.20
torch.float32 (32, 1024) gelu 3.4e-04 7.2e-04 jit 6.8e-05 2.1e-04 speedup forward 4.96 backward 3.38
torch.float32 (32, 4096) gelu 3.3e-04 7.0e-04 jit 6.4e-05 1.8e-04 speedup forward 5.07 backward 3.83
torch.float32 (256, 128) gelu 3.3e-04 6.9e-04 jit 6.5e-05 1.9e-04 speedup forward 5.07 backward 3.57
torch.float32 (256, 512) gelu 3.0e-04 6.2e-04 jit 6.4e-05 1.9e-04 speedup forward 4.73 backward 3.21
torch.float32 (256, 1024) gelu 3.3e-04 6.9e-04 jit 6.6e-05 2.1e-04 speedup forward 4.95 backward 3.35
torch.float32 (256, 4096) gelu 3.3e-04 6.8e-04 jit 9.3e-05 2.2e-04 speedup forward 3.53 backward 3.09
torch.float32 (1024, 128) gelu 3.1e-04 6.2e-04 jit 6.5e-05 1.9e-04 speedup forward 4.70 backward 3.32
torch.float32 (1024, 512) gelu 3.4e-04 6.4e-04 jit 7.7e-05 1.9e-04 speedup forward 4.41 backward 3.30
torch.float32 (1024, 1024) gelu 3.1e-04 6.1e-04 jit 9.5e-05 2.2e-04 speedup forward 3.26 backward 2.73
torch.float32 (1024, 4096) gelu 6.2e-04 9.9e-04 jit 2.7e-04 3.1e-04 speedup forward 2.26 backward 3.15
torch.float32 (8192, 128) gelu 3.1e-04 4.9e-04 jit 9.7e-05 1.9e-04 speedup forward 3.13 backward 2.55
torch.float32 (8192, 512) gelu 6.1e-04 1.0e-03 jit 2.7e-04 3.4e-04 speedup forward 2.27 backward 2.99
torch.float32 (8192, 1024) gelu 1.2e-03 1.9e-03 jit 5.3e-04 5.5e-04 speedup forward 2.21 backward 3.38
torch.float32 (8192, 4096) gelu 4.5e-03 6.7e-03 jit 2.2e-03 1.6e-03 speedup forward 2.04 backward 4.24
torch.float16 (32, 128) gelu 3.2e-04 6.3e-04 jit 1.1e-04 2.2e-04 speedup forward 2.84 backward 2.92
torch.float16 (32, 512) gelu 3.3e-04 6.9e-04 jit 6.2e-05 1.6e-04 speedup forward 5.23 backward 4.29
torch.float16 (32, 1024) gelu 3.0e-04 5.9e-04 jit 6.5e-05 1.7e-04 speedup forward 4.58 backward 3.46
torch.float16 (32, 4096) gelu 3.0e-04 6.1e-04 jit 6.4e-05 1.8e-04 speedup forward 4.63 backward 3.34
torch.float16 (256, 128) gelu 3.0e-04 5.9e-04 jit 6.4e-05 1.7e-04 speedup forward 4.61 backward 3.49
torch.float16 (256, 512) gelu 3.0e-04 5.9e-04 jit 6.3e-05 1.7e-04 speedup forward 4.68 backward 3.41
torch.float16 (256, 1024) gelu 2.9e-04 5.7e-04 jit 6.5e-05 1.6e-04 speedup forward 4.40 backward 3.54
torch.float16 (256, 4096) gelu 2.9e-04 5.5e-04 jit 7.5e-05 2.0e-04 speedup forward 3.87 backward 2.74
torch.float16 (1024, 128) gelu 3.7e-04 6.3e-04 jit 8.0e-05 2.3e-04 speedup forward 4.59 backward 2.75
torch.float16 (1024, 512) gelu 3.4e-04 6.0e-04 jit 6.6e-05 1.6e-04 speedup forward 5.13 backward 3.81
torch.float16 (1024, 1024) gelu 3.0e-04 5.9e-04 jit 7.2e-05 1.9e-04 speedup forward 4.12 backward 3.08
torch.float16 (1024, 4096) gelu 4.1e-04 6.9e-04 jit 1.6e-04 2.6e-04 speedup forward 2.49 backward 2.68
torch.float16 (8192, 128) gelu 3.6e-04 6.6e-04 jit 7.0e-05 1.8e-04 speedup forward 5.08 backward 3.73
torch.float16 (8192, 512) gelu 4.1e-04 7.0e-04 jit 1.6e-04 2.5e-04 speedup forward 2.57 backward 2.76
torch.float16 (8192, 1024) gelu 7.4e-04 1.2e-03 jit 3.2e-04 4.1e-04 speedup forward 2.30 backward 2.81
torch.float16 (8192, 4096) gelu 2.8e-03 3.9e-03 jit 1.5e-03 1.2e-03 speedup forward 1.86 backward 3.34
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2988/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2988/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2988",
"html_url": "https://github.com/huggingface/transformers/pull/2988",
"diff_url": "https://github.com/huggingface/transformers/pull/2988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2988.patch",
"merged_at": 1585941622000
} |
https://api.github.com/repos/huggingface/transformers/issues/2987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2987/comments | https://api.github.com/repos/huggingface/transformers/issues/2987/events | https://github.com/huggingface/transformers/pull/2987 | 569,575,777 | MDExOlB1bGxSZXF1ZXN0Mzc4NzYzMDQ3 | 2,987 | Add preprocessing step for transfo-xl tokenization to avoid tokenizing words followed by punctuation to <unk> | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=h1) Report\n> Merging [#2987](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `91.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2987 +/- ##\n==========================================\n+ Coverage 77.17% 77.18% +0.01% \n==========================================\n Files 98 98 \n Lines 15997 16009 +12 \n==========================================\n+ Hits 12345 12356 +11 \n- Misses 3652 3653 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2987/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `39.8% <91.66%> (+1.57%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=footer). Last update [129f060...e0a9fd2](https://codecov.io/gh/huggingface/transformers/pull/2987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Good to be merged then, I think!",
"Using this new preprocessing step, what is the proper way to format a paragraph of text so that it will work as a prompt for XLNet? Just a few lines of code as an example would make it clear to me, I think. \r\n\r\nThe \"padding text\" in the example contains sentences with punctuation not separated by spaces. At the end of the text is \"\\<eod\\> \\</s\\> \\<eos\\>\" Shouldn't I include a \\<eos\\> after every sentence, by hand? If not and it can be done automatically, how do you avoid things like Mr. or Dr. that end in periods but don't end the sentence? \r\n\r\nThanks.\r\n",
"Hi @summerstay, this preprocessing step is only necessary for Transfo-XL Net. For XLNet, you don't need to separate the text. Try out this code to see what I mean:\r\n\r\n```\r\ndef show_tokenization(text, model_name):\r\n tok = AutoTokenizer.from_pretrained(model_name)\r\n print(tok.decode(tok.encode(text, add_special_tokens=False)))\r\n\r\n\r\nshow_tokenization('This is an example. See what happens with, and. ?', 'transfo-xl-wt103')\r\nshow_tokenization('This is an example. See what happens with, and. ?', 'xlnet-base-cased')\r\n\r\n# prints:\r\n# You might want to consider setting `add_space_before_punct_symbol=True` as an argument to the `tokenizer.encode()` to avoid tokenizing words with punctuation symbols to the `<unk>` token\r\n# This is an <unk> See what happens <unk> <unk>?\r\n# This is an example. See what happens with, and.?\r\n```\r\n"
] | 1,582 | 1,584 | 1,582 | MEMBER | null | The problem is well shown in Issue #2000.
This PR adds a preprocessing step to transfo-xl tokenization to better deal with the problem.
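As a usage sketch — the kwarg name is the one shown in the review comments (`add_space_before_punct_symbol`); everything else here is illustrative:

```python
from transformers import TransfoXLTokenizer

tok = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
# Without the flag, "example." is missing from the word-level vocab and maps
# to <unk>; the preprocessing inserts a space before punctuation symbols so
# "example" and "." are looked up separately.
ids = tok.encode("This is an example.", add_space_before_punct_symbol=True)
print(tok.decode(ids))
```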
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2987",
"html_url": "https://github.com/huggingface/transformers/pull/2987",
"diff_url": "https://github.com/huggingface/transformers/pull/2987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2987.patch",
"merged_at": 1582575071000
} |
https://api.github.com/repos/huggingface/transformers/issues/2986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2986/comments | https://api.github.com/repos/huggingface/transformers/issues/2986/events | https://github.com/huggingface/transformers/issues/2986 | 569,569,745 | MDU6SXNzdWU1Njk1Njk3NDU= | 2,986 | How to generate BERT/Roberta word/sentence embedding? | {
"login": "zjplab",
"id": 16349466,
"node_id": "MDQ6VXNlcjE2MzQ5NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/16349466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zjplab",
"html_url": "https://github.com/zjplab",
"followers_url": "https://api.github.com/users/zjplab/followers",
"following_url": "https://api.github.com/users/zjplab/following{/other_user}",
"gists_url": "https://api.github.com/users/zjplab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zjplab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zjplab/subscriptions",
"organizations_url": "https://api.github.com/users/zjplab/orgs",
"repos_url": "https://api.github.com/users/zjplab/repos",
"events_url": "https://api.github.com/users/zjplab/events{/privacy}",
"received_events_url": "https://api.github.com/users/zjplab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Hi there. A few weeks or months ago, I wrote this notebook to introduce my colleagues to doing inference on LMs. In other words: how can I get a sentence representation out of them. You can have a look [here](https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb). It should be self-explanatory.",
"Hey @zjplab, for sentence embeddings, I'd recommend this library https://github.com/UKPLab/sentence-transformers along with their paper. They explain how they get their sentence embeddings as well as the pros and cons to several different methods of doing it. They have embeddings for bert/roberta and many more",
"There's also spaCy's wrapper of transformers [spacy-transformers](https://github.com/explosion/spacy-transformers). Can compare sentences to each other, and access sentence embeddings:\r\n\r\n[examples/Spacy_Transformers_Demo.ipynb](https://github.com/explosion/spacy-transformers/blob/master/examples/Spacy_Transformers_Demo.ipynb)\r\n```python\r\n# $ pip install spacy-transformers\r\n# $ python -m spacy download en_trf_bertbaseuncased_lg\r\n\r\nimport spacy\r\nnlp = spacy.load(\"en_trf_bertbaseuncased_lg\")\r\napple1 = nlp(\"Apple shares rose on the news.\")\r\napple2 = nlp(\"Apple sold fewer iPhones this quarter.\")\r\napple3 = nlp(\"Apple pie is delicious.\")\r\n\r\n# sentence similarity\r\nprint(apple1.similarity(apple2)) #0.69861203\r\nprint(apple1.similarity(apple3)) #0.5404963\r\n\r\n# sentence embeddings\r\napple1.vector # or apple1.tensor.sum(axis=0)\r\n```\r\n\r\nI'm fairly confident `apple1.vector` is the sentence embedding, but someone will want to double-check.\r\n\r\n[Edit] spacy-transformers currenty requires transformers==2.0.0, which is pretty far behind. It also doesn't let you embed batches (one sentence at a time). I'm gonna use UKPLab/sentence-transformers, personally.",
"> There's also spaCy's wrapper of transformers [spacy-transformers](https://github.com/explosion/spacy-transformers). Can compare sentences to each other, and access sentence embeddings:\r\n> \r\n> [examples/Spacy_Transformers_Demo.ipynb](https://github.com/explosion/spacy-transformers/blob/master/examples/Spacy_Transformers_Demo.ipynb)\r\n> \r\n> ```python\r\n> # $ pip install spacy-transformers\r\n> # $ python -m spacy download en_trf_bertbaseuncased_lg\r\n> \r\n> import spacy\r\n> nlp = spacy.load(\"en_trf_bertbaseuncased_lg\")\r\n> apple1 = nlp(\"Apple shares rose on the news.\")\r\n> apple2 = nlp(\"Apple sold fewer iPhones this quarter.\")\r\n> apple3 = nlp(\"Apple pie is delicious.\")\r\n> \r\n> # sentence similarity\r\n> print(apple1.similarity(apple2)) #0.69861203\r\n> print(apple1.similarity(apple3)) #0.5404963\r\n> \r\n> # sentence embeddings\r\n> apple1.vector # or apple1.tensor.sum(axis=0)\r\n> ```\r\n> \r\n> I'm fairly confident `apple1.vector` is the sentence embedding, but someone will want to double-check.\r\n> \r\n> [Edit] spacy-transformers currenty requires transformers==2.0.0, which is pretty far behind. It also doesn't let you embed batches (one sentence at a time). I'm gonna use UKPLab/sentence-transformers, personally.\r\n\r\nIs there any way to compare a contextualized word embedding with a word embedding? Let's say I have a sentence \"Apples are delicious\" and I want to compare the similarity of the contextualized word \"apples\" against words such as \"fruit\" or \"company\". Is there any way to do so with transformers like BERT that could deliver reliable numbers? Thanks in advance.\r\n",
"This one seems to do the job too: [https://github.com/ashokc/Bow-to-Bert](https://github.com/ashokc/Bow-to-Bert), accompanied with this blog post [http://xplordat.com/2019/09/23/bow-to-bert/](http://xplordat.com/2019/09/23/bow-to-bert/)"
] | 1,582 | 1,596 | 1,582 | NONE | null | I know the standard operation:
```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # (batch_size, input_len, embedding_size), but I need a single vector per sentence
```
But I am working on improving an RNN by incorporating BERT-like pretrained model embeddings. How do I get a sentence embedding in this case (one vector for the entire sentence)? Averaging or some other transformation of the last_hidden_states? Is `add_special_tokens` necessary? Any suggested papers to read?
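For what it's worth, one common recipe — a sketch only; see the sentence-transformers pointers in the comments for a comparison of pooling strategies — is to mean-pool the last hidden states over the token dimension:

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute",
                                          add_special_tokens=True)).unsqueeze(0)
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]           # (1, seq_len, hidden_size)
sentence_embedding = last_hidden_states.mean(dim=1)    # one vector per sentence: (1, hidden_size)
```
 | {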
"url": "https://api.github.com/repos/huggingface/transformers/issues/2986/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2985/comments | https://api.github.com/repos/huggingface/transformers/issues/2985/events | https://github.com/huggingface/transformers/issues/2985 | 569,562,120 | MDU6SXNzdWU1Njk1NjIxMjA= | 2,985 | `AutoModel.from_pretrained` sends config kwargs to model | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Seems related to #2694",
"Indeed! I fixed the misleading documentation with #2998."
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert (may apply to more)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoModel
AutoModel.from_pretrained('bert-base-uncased', output_attention=True)
```
([example from the docs](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_pretrained))
It crashes:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "[...]/transformers/src/transformers/modeling_auto.py", line 384, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "[...]/transformers/src/transformers/modeling_utils.py", line 463, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
TypeError: __init__() got an unexpected keyword argument 'output_attention'
```
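A workaround sketch that sidesteps the mixing of config and model kwargs — note the attribute the config actually defines is `output_attentions` (plural); building the config explicitly is standard practice, the rest is illustrative:

```python
from transformers import AutoConfig, AutoModel

# Build the config first, then hand it to the model, so config overrides
# are never forwarded to the model's __init__.
config = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True)
model = AutoModel.from_pretrained('bert-base-uncased', config=config)
```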
## Expected behavior
That the code returns a correct model, without crashing
## Environment info
- `transformers` version: 2.5.0 (master)
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2985/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2984/comments | https://api.github.com/repos/huggingface/transformers/issues/2984/events | https://github.com/huggingface/transformers/pull/2984 | 569,561,917 | MDExOlB1bGxSZXF1ZXN0Mzc4NzUzNTMy | 2,984 | add_ctags_to_git_ignore | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=h1) Report\n> Merging [#2984](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2984 +/- ##\n=======================================\n Coverage 77.17% 77.17% \n=======================================\n Files 98 98 \n Lines 15997 15997 \n=======================================\n Hits 12345 12345 \n Misses 3652 3652\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=footer). Last update [129f060...5636868](https://codecov.io/gh/huggingface/transformers/pull/2984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2984/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2984",
"html_url": "https://github.com/huggingface/transformers/pull/2984",
"diff_url": "https://github.com/huggingface/transformers/pull/2984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2984.patch",
"merged_at": 1582494932000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2983/comments | https://api.github.com/repos/huggingface/transformers/issues/2983/events | https://github.com/huggingface/transformers/pull/2983 | 569,554,729 | MDExOlB1bGxSZXF1ZXN0Mzc4NzQ4Mjgy | 2,983 | NER support for Albert in run_ner.py and NerPipeline | {
"login": "marma",
"id": 144026,
"node_id": "MDQ6VXNlcjE0NDAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/144026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marma",
"html_url": "https://github.com/marma",
"followers_url": "https://api.github.com/users/marma/followers",
"following_url": "https://api.github.com/users/marma/following{/other_user}",
"gists_url": "https://api.github.com/users/marma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marma/subscriptions",
"organizations_url": "https://api.github.com/users/marma/orgs",
"repos_url": "https://api.github.com/users/marma/repos",
"events_url": "https://api.github.com/users/marma/events{/privacy}",
"received_events_url": "https://api.github.com/users/marma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=h1) Report\n> Merging [#2983](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/129f0604acb9e8b9cebd2897437324198fa37a0a?src=pr&el=desc) will **decrease** coverage by `0.11%`.\n> The diff coverage is `19.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2983 +/- ##\n==========================================\n- Coverage 77.17% 77.05% -0.12% \n==========================================\n Files 98 98 \n Lines 15997 16023 +26 \n==========================================\n+ Hits 12345 12347 +2 \n- Misses 3652 3676 +24\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <ø> (ø)` | :arrow_up: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `70.88% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.25% <19.23%> (-3.9%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2983/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.05% <0%> (-0.44%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=footer). Last update [129f060...f711575](https://codecov.io/gh/huggingface/transformers/pull/2983?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | I have added a class, AlbertForTokenClassification, based on BertForTokenClassification and added it to lists used for checking NER capabilities in run_ner.py and NerPipeline.
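For readers skimming the diff, here is a minimal standalone sketch of the shape of such a head (the class name, label count and dropout value below are illustrative, not the exact code added in this PR):
```python
import torch
from torch import nn
from transformers import AlbertModel


class AlbertTokenClassifier(nn.Module):
    """Illustrative token-classification head in the style of BertForTokenClassification."""

    def __init__(self, model_name="albert-base-v2", num_labels=9, dropout=0.1):
        super().__init__()
        self.albert = AlbertModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.albert.config.hidden_size, num_labels)
        self.num_labels = num_labels

    def forward(self, input_ids, attention_mask=None, labels=None):
        # Output [0] holds the per-token hidden states: (batch, seq_len, hidden_size)
        sequence_output = self.albert(input_ids, attention_mask=attention_mask)[0]
        logits = self.classifier(self.dropout(sequence_output))
        if labels is None:
            return logits
        loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return loss, logits
```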
I tested NER fine-tuning on albert-base-v2, albert-large-v2 and albert-xlarge-v2 on the English CoNLL-2003 dataset, and they all get an F1 of around 0.93-0.94, so it seems to be working. The fine-tuned models are published [here](https://huggingface.co/KB).
I've also added some command-line options to better control tokenization since different tokenizers have different possible arguments and defaults. I guess that in the end, when all tokenizers behave the same, these options will be unnecessary.
I changed how NerPipeline outputs tokens, from .decode(...) to .convert_ids_to_tokens(...), since .decode(...) removes the '▁' at the beginning of sentencepiece tokens, making it impossible to tell which tokens form a word (a small illustration follows below). Using .decode(...) would make sense if the pipeline were outputting whole words rather than words split into tokens. It might make sense to change NerPipeline to output whole words, but that would assume all the tokens in a word get classified with the same label, which is not always the case.
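A small illustration of the difference (the exact pieces depend on the vocabulary, so the comments only show plausible output):
```python
from transformers import AlbertTokenizer

tok = AlbertTokenizer.from_pretrained("albert-base-v2")
ids = tok.encode("New York City", add_special_tokens=False)

# Keeps the sentencepiece pieces, so '▁' still marks the start of each word:
print(tok.convert_ids_to_tokens(ids))  # e.g. ['▁new', '▁york', '▁city']

# Joins the pieces back into text; per-token word boundaries are gone:
print(tok.decode(ids))  # e.g. 'new york city'
```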
I had one weird thing happening: when fine-tuning albert-large-v2 in particular for 3 or 4 epochs, F1 would be reported as exactly 0. When setting num_train_epochs to 2 or 5, this did not happen. I'm going to assume that this has nothing to do with the code submitted :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2983",
"html_url": "https://github.com/huggingface/transformers/pull/2983",
"diff_url": "https://github.com/huggingface/transformers/pull/2983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2983.patch",
"merged_at": 1582816976000
} |
https://api.github.com/repos/huggingface/transformers/issues/2982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2982/comments | https://api.github.com/repos/huggingface/transformers/issues/2982/events | https://github.com/huggingface/transformers/pull/2982 | 569,545,742 | MDExOlB1bGxSZXF1ZXN0Mzc4NzQxODIy | 2,982 | Change masking to direct labeling for TPU support. | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=h1) Report\n> Merging [#2982](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d1ab1fab1be7199e082129dfbe46eb52bca92799?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `62.5%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2982 +/- ##\n==========================================\n+ Coverage 74.09% 74.09% +<.01% \n==========================================\n Files 93 93 \n Lines 15249 15253 +4 \n==========================================\n+ Hits 11298 11301 +3 \n- Misses 3951 3952 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `64.95% <0%> (-0.31%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `95.87% <100%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.92% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.25% <50%> (+0.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=footer). Last update [d1ab1fa...757e2c3](https://codecov.io/gh/huggingface/transformers/pull/2982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Torch XLA conversion is very sensitive to certain core PyTorch operations; these can result in the TPU being slower than the CPU. The most obvious offenders are binary masking and calls to item().
https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md
This PR replaces some of these calls that are in the example/NER pathway.
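The general pattern is to keep tensor shapes static: instead of boolean indexing (which yields dynamically-shaped tensors that XLA must recompile for), padded positions are routed to the loss function's ignore_index. A sketch of that pattern (tensor names and sizes below are illustrative):
```python
import torch
from torch import nn

loss_fct = nn.CrossEntropyLoss()        # ignore_index defaults to -100

logits = torch.randn(2, 5, 9)           # (batch, seq_len, num_labels)
labels = torch.randint(0, 9, (2, 5))
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 0]])

# Static shapes throughout: padded positions keep their slot in the tensor but
# are mapped to ignore_index, so no dynamically-sized boolean selection occurs.
active = attention_mask.view(-1) == 1
active_labels = torch.where(
    active, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss = loss_fct(logits.view(-1, 9), active_labels)
```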
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2982",
"html_url": "https://github.com/huggingface/transformers/pull/2982",
"diff_url": "https://github.com/huggingface/transformers/pull/2982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2982.patch",
"merged_at": 1582660064000
} |
https://api.github.com/repos/huggingface/transformers/issues/2981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2981/comments | https://api.github.com/repos/huggingface/transformers/issues/2981/events | https://github.com/huggingface/transformers/issues/2981 | 569,525,918 | MDU6SXNzdWU1Njk1MjU5MTg= | 2,981 | Strange bug when Finetuning own pretrained model (with an even stranger solution) | {
"login": "aditya-malte",
"id": 20294625,
"node_id": "MDQ6VXNlcjIwMjk0NjI1",
"avatar_url": "https://avatars.githubusercontent.com/u/20294625?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aditya-malte",
"html_url": "https://github.com/aditya-malte",
"followers_url": "https://api.github.com/users/aditya-malte/followers",
"following_url": "https://api.github.com/users/aditya-malte/following{/other_user}",
"gists_url": "https://api.github.com/users/aditya-malte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aditya-malte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aditya-malte/subscriptions",
"organizations_url": "https://api.github.com/users/aditya-malte/orgs",
"repos_url": "https://api.github.com/users/aditya-malte/repos",
"events_url": "https://api.github.com/users/aditya-malte/events{/privacy}",
"received_events_url": "https://api.github.com/users/aditya-malte/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Hi! This is an interesting use-case, I think the error stems from the `run_glue` script trying to re-use the different attributes the `run_language_modeling` script had saved.\r\n\r\nThat includes:\r\n- the optimizer state\r\n- the scheduler state\r\n- the current global step, which is inferred from the name\r\n\r\nYour patch works because \r\n1) the optimizer state shouldn't be kept across different trainings. Deleting the optimizer file makes sense.\r\n2) The script believes you're already at a very high global step, as inferred from the name of your file. Setting a very high number of epochs means a very high number of steps to complete the training, hence some remaining steps.\r\n\r\nWe should work to fix the issue, but for now I would recommend deleting the files you don't need (`optimizer.pt` and `scheduler.pt`), and rename your folder containing your model/config/tokenizer files so that it doesn't end with a number.",
"Maybe we could raise a warning after pretraining is over. Ideally, this should be handled by the script itself, and such deletion etc. should not be required ",
"Yes, I was also stuck on this issue. @LysandreJik , kudos to your hack.",
"Stuck in the same issue too. Thanks for your suggestion @LysandreJik ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Roberta
Language I am using the model on (English, Chinese ...): Latin script (might have a mix of languages)
The problem arises when using:
run_glue on a model obtained from run_language_modeling
The task I am working on is:
Sequence Classification (single)
Steps to reproduce the behavior:
1. Train model using run_language_modeling
2. Use trained model in run_glue script
Error:
File "run_glue.py", line 148, in train
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
File "/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py", line 116, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
Note: I searched for what might cause this error (e.g. freezing some layers or passing an incorrect param group), but I have not done anything like that, so this error should not occur.
## Quick Hack/Solution:
There is a strange workaround: simply delete optimizer.pt and set the number of epochs to an arbitrarily large number (see the sketch below). Without a very high epoch count, the script proceeds directly to evaluation and does not do any training.
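A minimal sketch of the cleanup, assuming a hypothetical output directory (the maintainers' comment above additionally suggests removing scheduler.pt and renaming the folder so its name does not end in a number, since the global step is inferred from it):
```python
from pathlib import Path

# Hypothetical path: wherever run_language_modeling saved the model.
model_dir = Path("output/my-pretrained-roberta")

# Drop the leftover trainer state so run_glue does not try to resume
# the previous optimization run with mismatched parameter groups.
for leftover in ("optimizer.pt", "scheduler.pt"):
    target = model_dir / leftover
    if target.exists():
        target.unlink()
```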
## Environment info
Google Colab
Tokenizers 0.5
Transformers 2.5
GPU: P4 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2981/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2981/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2980/comments | https://api.github.com/repos/huggingface/transformers/issues/2980/events | https://github.com/huggingface/transformers/issues/2980 | 569,518,803 | MDU6SXNzdWU1Njk1MTg4MDM= | 2,980 | Cannot install Transformers version >2.3.0 with pip on CentOS | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1843765959,
"node_id": "MDU6TGFiZWwxODQzNzY1OTU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Installation",
"name": "Installation",
"color": "bfdadc",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Which version of Rust are you on? Looking at the trace [PyO3 ](https://pyo3.rs/v0.9.0-alpha.1/)requires at least 1.37.0-nightly 2019-07-19.",
"I am on version `1.41.0`",
"Interestingly, their website says 1.37.x is the requirement, but [on GitHub](https://github.com/PyO3/pyo3#usage) they say you need 1.42.0-nightly 2020-01-21. That's quite a harsh requirement, I think, but nothing you can do about it I suppose. (Except installing an older version of PyO3 from source.) Can you try either of those options and let us know?",
"I went with the former option (installing a nightly build of rust). Here is what I tried:\r\n\r\n1. Install [rustup](https://rustup.rs/)\r\n\r\n```\r\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\r\n```\r\n\r\n2. Install a [nightly build of rust](https://doc.rust-lang.org/book/appendix-07-nightly-rust.html#rustup-and-the-role-of-rust-nightly)\r\n\r\n```\r\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\r\nsource $HOME/.cargo/env\r\n```\r\n\r\nTry to update `transformers`:\r\n\r\n```\r\npip install --upgrade transformers\r\n```\r\n\r\nNo beans. Got the following stacktrace:\r\n\r\n```\r\nBuilding wheels for collected packages: tokenizers\r\n Building wheel for tokenizers (PEP 517) ... error\r\n ERROR: Command errored out with exit status 1:\r\n command: /home/johnmg/t2t/bin/python /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp6fk4hgm1\r\n cwd: /tmp/pip-install-k2pjj650/tokenizers\r\n Complete output (224 lines):\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n creating build\r\n creating build/lib\r\n creating build/lib/tokenizers\r\n copying tokenizers/__init__.py -> build/lib/tokenizers\r\n creating build/lib/tokenizers/models\r\n copying tokenizers/models/__init__.py -> build/lib/tokenizers/models\r\n creating build/lib/tokenizers/decoders\r\n copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders\r\n creating build/lib/tokenizers/normalizers\r\n copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers\r\n creating build/lib/tokenizers/pre_tokenizers\r\n copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers\r\n creating build/lib/tokenizers/processors\r\n copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors\r\n creating build/lib/tokenizers/trainers\r\n copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers\r\n creating build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations\r\n copying tokenizers/__init__.pyi -> build/lib/tokenizers\r\n copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models\r\n copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders\r\n copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers\r\n copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers\r\n copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors\r\n copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers\r\n running build_ext\r\n running build_rust\r\n Updating crates.io index\r\n warning: unused manifest key: target.x86_64-apple-darwin.rustflags\r\n Compiling proc-macro2 v1.0.8\r\n Compiling unicode-xid v0.2.0\r\n Compiling syn v1.0.15\r\n Compiling libc v0.2.67\r\n Compiling lazy_static v1.4.0\r\n Compiling autocfg v1.0.0\r\n Compiling cfg-if v0.1.10\r\n Compiling semver-parser v0.7.0\r\n Compiling memchr v2.3.3\r\n 
Compiling serde v1.0.104\r\n Compiling regex-syntax v0.6.14\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"proc-macro\"' -C metadata=7f8009cddc5e6def -C extra-filename=-7f8009cddc5e6def --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/proc-macro2-7f8009cddc5e6def -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unicode_xid /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=c4b64db85789a8a8 -C extra-filename=-c4b64db85789a8a8 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"clone-impls\"' --cfg 'feature=\"default\"' --cfg 'feature=\"derive\"' --cfg 'feature=\"extra-traits\"' --cfg 'feature=\"full\"' --cfg 'feature=\"parsing\"' --cfg 'feature=\"printing\"' --cfg 'feature=\"proc-macro\"' --cfg 'feature=\"quote\"' --cfg 'feature=\"visit\"' -C metadata=df69c996af1dadc1 -C extra-filename=-df69c996af1dadc1 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/syn-df69c996af1dadc1 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=59dfff2fc32cb87b -C extra-filename=-59dfff2fc32cb87b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/libc-59dfff2fc32cb87b -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name lazy_static /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=daaa2cdb90fc8b44 -C extra-filename=-daaa2cdb90fc8b44 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name autocfg /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=be76b16d1dfaa3e8 -C extra-filename=-be76b16d1dfaa3e8 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name cfg_if --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs 
--error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=796850ba8a8cedaa -C extra-filename=-796850ba8a8cedaa --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name semver_parser /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-parser-0.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=69550f148bc5bb95 -C extra-filename=-69550f148bc5bb95 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Compiling maybe-uninit v2.0.0\r\n Compiling ryu v1.0.2\r\n Compiling getrandom v0.1.14\r\n Compiling unicode-width v0.1.7\r\n Compiling itoa v0.4.5\r\n Compiling scopeguard v1.1.0\r\n Compiling ppv-lite86 v0.2.6\r\n Compiling bitflags v1.2.1\r\n Compiling rayon-core v1.7.0\r\n Compiling version_check v0.9.1\r\n Compiling unindent v0.1.5\r\n Compiling smallvec v1.2.0\r\n Compiling either v1.5.3\r\n Compiling strsim v0.8.0\r\n Compiling vec_map v0.8.1\r\n Compiling ansi_term v0.11.0\r\n Compiling number_prefix v0.3.0\r\n Compiling unicode_categories v0.1.1\r\n Compiling spin v0.5.2\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' --cfg 'feature=\"use_std\"' -C metadata=1d902e6b0fc561bf -C extra-filename=-1d902e6b0fc561bf --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/memchr-1d902e6b0fc561bf -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name regex_syntax /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"unicode\"' --cfg 'feature=\"unicode-age\"' --cfg 'feature=\"unicode-bool\"' --cfg 'feature=\"unicode-case\"' --cfg 'feature=\"unicode-gencat\"' --cfg 'feature=\"unicode-perl\"' --cfg 'feature=\"unicode-script\"' --cfg 'feature=\"unicode-segment\"' -C metadata=49413942df53b636 -C extra-filename=-49413942df53b636 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"derive\"' --cfg 'feature=\"serde_derive\"' --cfg 'feature=\"std\"' -C metadata=d826302b09b30fa3 -C extra-filename=-d826302b09b30fa3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/serde-d826302b09b30fa3 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi 
--crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=733894adef0cf9fb -C extra-filename=-733894adef0cf9fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/maybe-uninit-733894adef0cf9fb -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=5e3d074139bd55e5 -C extra-filename=-5e3d074139bd55e5 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/ryu-5e3d074139bd55e5 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"std\"' -C metadata=df35e20e514661d3 -C extra-filename=-df35e20e514661d3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/getrandom-df35e20e514661d3 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unicode_width /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=23c2035cef4c6900 -C extra-filename=-23c2035cef4c6900 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name itoa /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0670cd2bba7e59c0 -C extra-filename=-0670cd2bba7e59c0 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name scopeguard /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=cc84917dee271887 -C extra-filename=-cc84917dee271887 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name ppv_lite86 --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"simd\"' --cfg 'feature=\"std\"' -C metadata=013047a1e1834c1c -C extra-filename=-013047a1e1834c1c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C 
opt-level=3 --cfg 'feature=\"default\"' -C metadata=836b697d86bba37f -C extra-filename=-836b697d86bba37f --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/bitflags-836b697d86bba37f -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name build_script_build --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=f57886c0482abf7e -C extra-filename=-f57886c0482abf7e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/rayon-core-f57886c0482abf7e -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name version_check /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=db8b107c34362735 -C extra-filename=-db8b107c34362735 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unindent --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=dc1487787c7c90f6 -C extra-filename=-dc1487787c7c90f6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name smallvec --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.2.0/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d5eba03a866e39a3 -C extra-filename=-d5eba03a866e39a3 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name either /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=99303bd779c82d42 -C extra-filename=-99303bd779c82d42 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name strsim /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=e2648e9eff68e95c -C extra-filename=-e2648e9eff68e95c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name vec_map /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6138169e8c0f8f54 -C extra-filename=-6138169e8c0f8f54 --out-dir 
/tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name ansi_term /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=fa4822742d417eef -C extra-filename=-fa4822742d417eef --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name number_prefix /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=e3de789ab67e6629 -C extra-filename=-e3de789ab67e6629 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name unicode_categories /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=190c74b7ffce666b -C extra-filename=-190c74b7ffce666b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Running `rustc --crate-name spin /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.5.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=bbad36607e408080 -C extra-filename=-bbad36607e408080 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow`\r\n Compiling semver v0.9.0\r\n Running `rustc --crate-name semver /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-0.9.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=3dc1d287cff8dfce -C extra-filename=-3dc1d287cff8dfce --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern semver_parser=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsemver_parser-69550f148bc5bb95.rmeta --cap-lints allow`\r\n Compiling textwrap v0.11.0\r\n Running `rustc --crate-name textwrap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=50a2c2e63154ee37 -C extra-filename=-50a2c2e63154ee37 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --cap-lints allow`\r\n Compiling thread_local v1.0.1\r\n Running `rustc --crate-name thread_local 
/home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=eac48644045cdc04 -C extra-filename=-eac48644045cdc04 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --cap-lints allow`\r\n Compiling unicode-normalization-alignments v0.1.12\r\n Running `rustc --crate-name unicode_normalization_alignments /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-normalization-alignments-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=4bec82dc12738e91 -C extra-filename=-4bec82dc12738e91 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern smallvec=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsmallvec-d5eba03a866e39a3.rmeta --cap-lints allow`\r\n Compiling rustc_version v0.2.3\r\n Running `rustc --crate-name rustc_version /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rustc_version-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=f9f112a037565bee -C extra-filename=-f9f112a037565bee --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern semver=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsemver-3dc1d287cff8dfce.rmeta --cap-lints allow`\r\n Compiling crossbeam-utils v0.7.2\r\n Compiling crossbeam-epoch v0.8.2\r\n Compiling num-traits v0.2.11\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"lazy_static\"' --cfg 'feature=\"std\"' -C metadata=82d9fcbae2b0fcfe -C extra-filename=-82d9fcbae2b0fcfe --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-utils-82d9fcbae2b0fcfe -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"lazy_static\"' --cfg 'feature=\"std\"' -C metadata=d1169acc937644c9 -C extra-filename=-d1169acc937644c9 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-epoch-d1169acc937644c9 -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/build.rs --error-format=json --json=diagnostic-rendered-ansi 
--crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=5b86f099584d3dae -C extra-filename=-5b86f099584d3dae --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/num-traits-5b86f099584d3dae -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libautocfg-be76b16d1dfaa3e8.rlib --cap-lints allow`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/rayon-core-f57886c0482abf7e/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/memchr-1d902e6b0fc561bf/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/bitflags-836b697d86bba37f/build-script-build`\r\n Running `rustc --crate-name memchr /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' --cfg 'feature=\"use_std\"' -C metadata=2eb004acc56bfef6 -C extra-filename=-2eb004acc56bfef6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`\r\n Running `rustc --crate-name bitflags /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=561e1eaab5576c8d -C extra-filename=-561e1eaab5576c8d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg bitflags_const_fn`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/libc-59dfff2fc32cb87b/build-script-build`\r\n Running `rustc --crate-name libc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=1ecf199bab423512 -C extra-filename=-1ecf199bab423512 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg freebsd11 --cfg libc_priv_mod_use --cfg libc_union --cfg libc_const_size_of --cfg libc_align --cfg libc_core_cvoid --cfg libc_packedN`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/getrandom-df35e20e514661d3/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/ryu-5e3d074139bd55e5/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/syn-df69c996af1dadc1/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/serde-d826302b09b30fa3/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/maybe-uninit-733894adef0cf9fb/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/proc-macro2-7f8009cddc5e6def/build-script-build`\r\n Running `rustc --crate-name maybe_uninit 
/home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d37c4119dc6540c4 -C extra-filename=-d37c4119dc6540c4 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg derive_copy --cfg repr_transparent --cfg native_uninit`\r\n Running `rustc --crate-name ryu /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=c0ae351db5af2a03 -C extra-filename=-c0ae351db5af2a03 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg integer128 --cfg must_use_return --cfg maybe_uninit`\r\n Compiling memoffset v0.5.3\r\n Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=3cd08936cb404f8e -C extra-filename=-3cd08936cb404f8e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/build/memoffset-3cd08936cb404f8e -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern rustc_version=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librustc_version-f9f112a037565bee.rlib --cap-lints allow`\r\n Compiling c2-chacha v0.2.3\r\n Running `rustc --crate-name c2_chacha --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/c2-chacha-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"simd\"' --cfg 'feature=\"std\"' -C metadata=6547bb04c22119fb -C extra-filename=-6547bb04c22119fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern ppv_lite86=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libppv_lite86-013047a1e1834c1c.rmeta --cap-lints allow`\r\n Running `rustc --crate-name proc_macro2 --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"proc-macro\"' -C metadata=e43ba7375fb44eb2 -C extra-filename=-e43ba7375fb44eb2 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern unicode_xid=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_xid-c4b64db85789a8a8.rmeta --cap-lints allow --cfg use_proc_macro --cfg wrap_proc_macro --cfg proc_macro_span`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-utils-82d9fcbae2b0fcfe/build-script-build`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/num-traits-5b86f099584d3dae/build-script-build`\r\n Compiling aho-corasick v0.7.8\r\n Running `rustc --crate-name aho_corasick /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/aho-corasick-0.7.8/src/lib.rs --error-format=json 
--json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=1aa04c8e211e8977 -C extra-filename=-1aa04c8e211e8977 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern memchr=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemchr-2eb004acc56bfef6.rmeta --cap-lints allow`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/crossbeam-epoch-d1169acc937644c9/build-script-build`\r\n Running `rustc --crate-name num_traits /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=5895a2370c90e52c -C extra-filename=-5895a2370c90e52c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg has_i128`\r\n Running `/tmp/pip-install-k2pjj650/tokenizers/target/release/build/memoffset-3cd08936cb404f8e/build-script-build`\r\n Compiling quote v1.0.2\r\n Running `rustc --crate-name quote --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/quote-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"proc-macro\"' -C metadata=2a3c58f3767a45fb -C extra-filename=-2a3c58f3767a45fb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --cap-lints allow`\r\n Running `rustc --crate-name memoffset /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=2c7b74dca44a9da4 -C extra-filename=-2c7b74dca44a9da4 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --cap-lints allow --cfg memoffset_maybe_uninit --cfg memoffset_doctests`\r\n Running `rustc --crate-name crossbeam_utils /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"lazy_static\"' --cfg 'feature=\"std\"' -C metadata=388986d928bc4f32 -C extra-filename=-388986d928bc4f32 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --cap-lints allow --cfg has_min_const_fn --cfg has_atomic_u8 --cfg has_atomic_u16 --cfg has_atomic_u32 --cfg has_atomic_u64`\r\n Running `rustc --crate-name syn --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/src/lib.rs --error-format=json 
--json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"clone-impls\"' --cfg 'feature=\"default\"' --cfg 'feature=\"derive\"' --cfg 'feature=\"extra-traits\"' --cfg 'feature=\"full\"' --cfg 'feature=\"parsing\"' --cfg 'feature=\"printing\"' --cfg 'feature=\"proc-macro\"' --cfg 'feature=\"quote\"' --cfg 'feature=\"visit\"' -C metadata=f20a4e9749d3ee5d -C extra-filename=-f20a4e9749d3ee5d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rmeta --extern unicode_xid=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_xid-c4b64db85789a8a8.rmeta --cap-lints allow`\r\n Compiling clicolors-control v1.0.1\r\n Compiling num_cpus v1.12.0\r\n Compiling termios v0.3.1\r\n Compiling atty v0.2.14\r\n Running `rustc --crate-name getrandom --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"std\"' -C metadata=048e32c5a0c04df6 -C extra-filename=-048e32c5a0c04df6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`\r\n Running `rustc --crate-name clicolors_control /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clicolors-control-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"terminal_autoconfig\"' -C metadata=0b0b6007b4183ec9 -C extra-filename=-0b0b6007b4183ec9 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`\r\n Running `rustc --crate-name num_cpus /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num_cpus-1.12.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d96ba1da53092c10 -C extra-filename=-d96ba1da53092c10 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`\r\n Running `rustc --crate-name termios /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/termios-0.3.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ec3a50c8d1bc7e85 -C extra-filename=-ec3a50c8d1bc7e85 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L 
dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`\r\n Running `rustc --crate-name atty /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/atty-0.2.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=a326ad27cb30e935 -C extra-filename=-a326ad27cb30e935 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --cap-lints allow`\r\n Compiling crossbeam-queue v0.2.1\r\n Running `rustc --crate-name crossbeam_epoch /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"lazy_static\"' --cfg 'feature=\"std\"' -C metadata=91bb1210b79b03da -C extra-filename=-91bb1210b79b03da --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern maybe_uninit=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmaybe_uninit-d37c4119dc6540c4.rmeta --extern memoffset=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemoffset-2c7b74dca44a9da4.rmeta --extern scopeguard=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libscopeguard-cc84917dee271887.rmeta --cap-lints allow --cfg has_min_const_fn`\r\n Running `rustc --crate-name crossbeam_queue /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-queue-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"std\"' -C metadata=c776341a9b185621 -C extra-filename=-c776341a9b185621 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcfg_if-796850ba8a8cedaa.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --cap-lints allow`\r\n Compiling clap v2.33.0\r\n Running `rustc --crate-name clap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-2.33.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"ansi_term\"' --cfg 'feature=\"atty\"' --cfg 'feature=\"color\"' --cfg 'feature=\"default\"' --cfg 'feature=\"strsim\"' --cfg 'feature=\"suggestions\"' --cfg 'feature=\"vec_map\"' -C metadata=3071ac3e668ed07c -C extra-filename=-3071ac3e668ed07c --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern 
ansi_term=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libansi_term-fa4822742d417eef.rmeta --extern atty=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libatty-a326ad27cb30e935.rmeta --extern bitflags=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libbitflags-561e1eaab5576c8d.rmeta --extern strsim=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libstrsim-e2648e9eff68e95c.rmeta --extern textwrap=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libtextwrap-50a2c2e63154ee37.rmeta --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --extern vec_map=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libvec_map-6138169e8c0f8f54.rmeta --cap-lints allow`\r\n Compiling rand_core v0.5.1\r\n Running `rustc --crate-name rand_core --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_core-0.5.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"alloc\"' --cfg 'feature=\"getrandom\"' --cfg 'feature=\"std\"' -C metadata=7ce6542ec4257257 -C extra-filename=-7ce6542ec4257257 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern getrandom=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libgetrandom-048e32c5a0c04df6.rmeta --cap-lints allow`\r\n Compiling regex v1.3.4\r\n Running `rustc --crate-name regex /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-1.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"aho-corasick\"' --cfg 'feature=\"default\"' --cfg 'feature=\"memchr\"' --cfg 'feature=\"perf\"' --cfg 'feature=\"perf-cache\"' --cfg 'feature=\"perf-dfa\"' --cfg 'feature=\"perf-inline\"' --cfg 'feature=\"perf-literal\"' --cfg 'feature=\"std\"' --cfg 'feature=\"thread_local\"' --cfg 'feature=\"unicode\"' --cfg 'feature=\"unicode-age\"' --cfg 'feature=\"unicode-bool\"' --cfg 'feature=\"unicode-case\"' --cfg 'feature=\"unicode-gencat\"' --cfg 'feature=\"unicode-perl\"' --cfg 'feature=\"unicode-script\"' --cfg 'feature=\"unicode-segment\"' -C metadata=88e281e069c7ba33 -C extra-filename=-88e281e069c7ba33 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern aho_corasick=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libaho_corasick-1aa04c8e211e8977.rmeta --extern memchr=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmemchr-2eb004acc56bfef6.rmeta --extern regex_syntax=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex_syntax-49413942df53b636.rmeta --extern thread_local=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libthread_local-eac48644045cdc04.rmeta --cap-lints allow`\r\n Compiling crossbeam-deque v0.7.3\r\n Running `rustc --crate-name crossbeam_deque /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-deque-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0c2beaa054f6da4d -C extra-filename=-0c2beaa054f6da4d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern 
crossbeam_epoch=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_epoch-91bb1210b79b03da.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern maybe_uninit=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libmaybe_uninit-d37c4119dc6540c4.rmeta --cap-lints allow`\r\n Compiling rand_chacha v0.2.1\r\n Running `rustc --crate-name rand_chacha --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_chacha-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"std\"' -C metadata=9c3e099a39a894e1 -C extra-filename=-9c3e099a39a894e1 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern c2_chacha=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libc2_chacha-6547bb04c22119fb.rmeta --extern rand_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_core-7ce6542ec4257257.rmeta --cap-lints allow`\r\n Running `rustc --crate-name rayon_core --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=43e482f55cc760b6 -C extra-filename=-43e482f55cc760b6 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_deque-0c2beaa054f6da4d.rmeta --extern crossbeam_queue=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_queue-c776341a9b185621.rmeta --extern crossbeam_utils=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_utils-388986d928bc4f32.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern num_cpus=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libnum_cpus-d96ba1da53092c10.rmeta --cap-lints allow`\r\n Compiling rand v0.7.3\r\n Running `rustc --crate-name rand --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"alloc\"' --cfg 'feature=\"default\"' --cfg 'feature=\"getrandom\"' --cfg 'feature=\"getrandom_package\"' --cfg 'feature=\"libc\"' --cfg 'feature=\"std\"' -C metadata=d905e01484b0667a -C extra-filename=-d905e01484b0667a --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern getrandom_package=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libgetrandom-048e32c5a0c04df6.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --extern rand_chacha=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_chacha-9c3e099a39a894e1.rmeta --extern rand_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librand_core-7ce6542ec4257257.rmeta --cap-lints allow`\r\n Compiling rayon v1.3.0\r\n Running `rustc --crate-name rayon --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/lib.rs --error-format=json 
--json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5f29404b086ee91b -C extra-filename=-5f29404b086ee91b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libcrossbeam_deque-0c2beaa054f6da4d.rmeta --extern either=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libeither-99303bd779c82d42.rmeta --extern rayon_core=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/librayon_core-43e482f55cc760b6.rmeta --cap-lints allow`\r\n Compiling console v0.9.2\r\n Running `rustc --crate-name console --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/console-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' --cfg 'feature=\"unicode-width\"' -C metadata=2ce4edc7b4c8449e -C extra-filename=-2ce4edc7b4c8449e --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern clicolors_control=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libclicolors_control-0b0b6007b4183ec9.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern libc=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblibc-1ecf199bab423512.rmeta --extern regex=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex-88e281e069c7ba33.rmeta --extern termios=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libtermios-ec3a50c8d1bc7e85.rmeta --extern unicode_width=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunicode_width-23c2035cef4c6900.rmeta --cap-lints allow`\r\n Compiling indicatif v0.14.0\r\n Running `rustc --crate-name indicatif --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indicatif-0.14.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=e47d5e054f023436 -C extra-filename=-e47d5e054f023436 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern console=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libconsole-2ce4edc7b4c8449e.rmeta --extern lazy_static=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/liblazy_static-daaa2cdb90fc8b44.rmeta --extern number_prefix=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libnumber_prefix-e3de789ab67e6629.rmeta --extern regex=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libregex-88e281e069c7ba33.rmeta --cap-lints allow`\r\n Compiling pyo3-derive-backend v0.8.5\r\n Running `rustc --crate-name pyo3_derive_backend --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-derive-backend-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=aba0e2d131928acb -C extra-filename=-aba0e2d131928acb --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern 
proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rmeta --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rmeta --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rmeta --cap-lints allow`\r\n Compiling serde_derive v1.0.104\r\n Compiling proc-macro-hack v0.5.11\r\n Compiling ghost v0.1.1\r\n Compiling ctor v0.1.13\r\n Compiling inventory-impl v0.1.5\r\n Compiling pyo3cls v0.8.5\r\n Running `rustc --crate-name serde_derive /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 --cfg 'feature=\"default\"' -C metadata=034c70940b2eedef -C extra-filename=-034c70940b2eedef --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name ghost --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ghost-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=883c66593baf1317 -C extra-filename=-883c66593baf1317 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name proc_macro_hack --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3cf880fc746cfd7d -C extra-filename=-3cf880fc746cfd7d --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name ctor --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ctor-0.1.13/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=2c98d245a0bd2934 -C extra-filename=-2c98d245a0bd2934 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern 
quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name inventory_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-impl-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=753bfdf5c2c1b356 -C extra-filename=-753bfdf5c2c1b356 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name pyo3cls --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3cls-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3b5ed485ca9e08fe -C extra-filename=-3b5ed485ca9e08fe --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern pyo3_derive_backend=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libpyo3_derive_backend-aba0e2d131928acb.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Compiling paste-impl v0.1.7\r\n Compiling indoc-impl v0.3.4\r\n Running `rustc --crate-name paste_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=84e729460d896f1b -C extra-filename=-84e729460d896f1b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow`\r\n Running `rustc --crate-name indoc_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3a756aee825cada7 -C extra-filename=-3a756aee825cada7 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern 
proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern unindent=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunindent-dc1487787c7c90f6.rlib --extern proc_macro --cap-lints allow`\r\n error: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so)\r\n --> /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs:12:5\r\n |\r\n 12 | use proc_macro_hack::proc_macro_hack;\r\n | ^^^^^^^^^^^^^^^\r\n\r\n error: aborting due to previous error\r\n\r\n error: could not compile `indoc-impl`.\r\n\r\n Caused by:\r\n process didn't exit successfully: `rustc --crate-name indoc_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=3a756aee825cada7 -C extra-filename=-3a756aee825cada7 --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern unindent=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libunindent-dc1487787c7c90f6.rlib --extern proc_macro --cap-lints allow` (exit code: 1)\r\n warning: build failed, waiting for other jobs to finish...\r\n error: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so)\r\n --> /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs:6:5\r\n |\r\n 6 | use proc_macro_hack::proc_macro_hack;\r\n | ^^^^^^^^^^^^^^^\r\n\r\n error: aborting due to previous error\r\n\r\n error: could not compile `paste-impl`.\r\n\r\n Caused by:\r\n process didn't exit successfully: `rustc --crate-name paste_impl --edition=2018 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=84e729460d896f1b -C extra-filename=-84e729460d896f1b --out-dir /tmp/pip-install-k2pjj650/tokenizers/target/release/deps -L dependency=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro_hack-3cf880fc746cfd7d.so --extern proc_macro2=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libproc_macro2-e43ba7375fb44eb2.rlib --extern quote=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libquote-2a3c58f3767a45fb.rlib --extern 
syn=/tmp/pip-install-k2pjj650/tokenizers/target/release/deps/libsyn-f20a4e9749d3ee5d.rlib --extern proc_macro --cap-lints allow` (exit code: 1)\r\n warning: build failed, waiting for other jobs to finish...\r\n error: build failed\r\n cargo rustc --lib --manifest-path Cargo.toml --features pyo3/python3 pyo3/extension-module --release --verbose -- --crate-type cdylib\r\n error: cargo failed with code: 101\r\n\r\n ----------------------------------------\r\n ERROR: Failed building wheel for tokenizers\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly\r\n```",
"I am not sure if this is a version issue or an incompatibility issue, cf. and related: https://stackoverflow.com/questions/55363823/redhat-centos-glibc-2-18-not-found\r\n\r\nBecause of cross-platform compatibility, it might better to have `tokenizers` as an optional dependency and set `use_fast` to False by default instead to True.",
"Hi @JohnGiorgi, \r\n\r\nThanks for reporting this issue and sorry you're having trouble with transformers. \r\n\r\nAs @BramVanroy mentioned pyo3 package (which allows us to build the Python binding to Rust) requires some features only available in nightly release. However 1.42 is only required for the version `9.0-alpha` of pyo3 which we're not currently using.\r\n\r\nI did try to reproduce the error you linked above on a Fedora machine I've at hand and wasn't able to. Can you provide some more information regarding this machine ? On which platform is it running ? x86_64 ? POWER9 ? Also if you can include some `uname -a ` output that would be very helpful.\r\n\r\nAny information you might be able to provide to us will help us track down this build issue.\r\n\r\nMany thanks ",
"Looking at the trace, it seems `tokenizers` uses v0.8.5 of PyO3, which (according to their docs) requires 1.37.0-nightly 2019-07-19. So it's a bit odd that installation didn't work for OP on 1.41 from the start. But perhaps it has to do with GLIBC_2.18?\r\n\r\n@mfuntowicz From the CentOS fora, [it seems that 2.18 will never come to CentOS 7](https://forums.centos.org/viewtopic.php?t=71740) so I fear this is just an incompatibility. That adds to my point that I would suggest that `tokenizers` is an optional dependency and use_fast should be False by default. \r\n",
"@BramVanroy The nightly features can take many release cycles before landing on a stable version. So these features are probably still part of the nightly only, even in the stable `1.4.1`.\r\n\r\nAnyway, we provide `manylinux` wheels, and these are built on `CentOS 5` so I think the real problem here is to find out why it didn't download the wheel in the first place, but tried to re-compile instead.",
"@n1t0 Ah, that makes sense. Thanks for the clarification.\r\n\r\n@JohnGiorgi Can you try to install `tokenizers` with `pip debug install tokenizers -vv`? It'll show you all compatible tags.",
"@mfuntowicz\r\n\r\nOutput of `lscpu`\r\n\r\n```\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 40\r\nOn-line CPU(s) list: 0-39\r\nThread(s) per core: 1\r\nCore(s) per socket: 20\r\nSocket(s): 2\r\nNUMA node(s): 2\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 85\r\nModel name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz\r\nStepping: 4\r\nCPU MHz: 2057.373\r\nCPU max MHz: 3700.0000\r\nCPU min MHz: 1000.0000\r\nBogoMIPS: 4800.00\r\nVirtualization: VT-x\r\nL1d cache: 32K\r\nL1i cache: 32K\r\nL2 cache: 1024K\r\nL3 cache: 28160K\r\nNUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38\r\nNUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear spec_ctrl intel_stibp flush_l1d\r\n```\r\n\r\nOutput of `uname -a`\r\n\r\n```\r\nLinux beluga1.int.ets1.calculquebec.ca 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019 x86_64 GNU/Linux\r\n```\r\n\r\nOutput of `cat /etc/os-release`\r\n\r\n```\r\nNAME=\"CentOS Linux\"\r\nVERSION=\"7 (Core)\"\r\nID=\"centos\"\r\nID_LIKE=\"rhel fedora\"\r\nVERSION_ID=\"7\"\r\nPRETTY_NAME=\"CentOS Linux 7 (Core)\"\r\nANSI_COLOR=\"0;31\"\r\nCPE_NAME=\"cpe:/o:centos:centos:7\"\r\nHOME_URL=\"https://www.centos.org/\"\r\nBUG_REPORT_URL=\"https://bugs.centos.org/\"\r\n\r\nCENTOS_MANTISBT_PROJECT=\"CentOS-7\"\r\nCENTOS_MANTISBT_PROJECT_VERSION=\"7\"\r\nREDHAT_SUPPORT_PRODUCT=\"centos\"\r\nREDHAT_SUPPORT_PRODUCT_VERSION=\"7\"\r\n```\r\n\r\nI am working on a Compute Canada cluster, so information about it can also be found [here](https://docs.computecanada.ca/wiki/B%C3%A9luga/en).\r\n\r\n@BramVanroy \r\n\r\nHere is the output of `pip debug install tokenizers -vv`\r\n\r\n```\r\nWARNING: This command is only meant for debugging. 
Do not use this with automation for parsing and getting these details, since the output and options of this command may change without notice.\r\npip version: pip 20.0.2 from /home/johnmg/t2t/lib/python3.7/site-packages/pip (python 3.7)\r\nsys.version: 3.7.4 (default, Jul 18 2019, 19:34:02)\r\n[GCC 5.4.0]\r\nsys.executable: /home/johnmg/t2t/bin/python\r\nsys.getdefaultencoding: utf-8\r\nsys.getfilesystemencoding: utf-8\r\nlocale.getpreferredencoding: UTF-8\r\nsys.platform: linux\r\nsys.implementation:\r\n name: cpython\r\n'cert' config value: install, wheel, :env:\r\nREQUESTS_CA_BUNDLE: None\r\nCURL_CA_BUNDLE: /etc/pki/tls/certs/ca-bundle.crt\r\npip._vendor.certifi.where(): /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/certifi/cacert.pem\r\nCompatible tags: 27\r\n cp37-cp37m-linux_x86_64\r\n cp37-abi3-linux_x86_64\r\n cp37-none-linux_x86_64\r\n cp36-abi3-linux_x86_64\r\n cp35-abi3-linux_x86_64\r\n cp34-abi3-linux_x86_64\r\n cp33-abi3-linux_x86_64\r\n cp32-abi3-linux_x86_64\r\n py37-none-linux_x86_64\r\n py3-none-linux_x86_64\r\n py36-none-linux_x86_64\r\n py35-none-linux_x86_64\r\n py34-none-linux_x86_64\r\n py33-none-linux_x86_64\r\n py32-none-linux_x86_64\r\n py31-none-linux_x86_64\r\n py30-none-linux_x86_64\r\n cp37-none-any\r\n py37-none-any\r\n py3-none-any\r\n py36-none-any\r\n py35-none-any\r\n py34-none-any\r\n py33-none-any\r\n py32-none-any\r\n py31-none-any\r\n py30-none-any\r\n```",
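One detail that stands out in that tag list: none of the 27 compatible tags is a `manylinux` tag, which would explain why pip skips the prebuilt `tokenizers` wheel and falls back to compiling the sdist. Some managed clusters deliberately disable `manylinux` wheels, and as far as I know Compute Canada is one of them. A quick way to surface this, assuming a reasonably recent pip, is to forbid source builds so the missing wheel becomes an explicit error:

```bash
# With :all:, pip is only allowed to use wheels; if no compatible
# tokenizers wheel exists it fails fast with "Could not find a
# version..." instead of starting a long Rust build.
pip install --only-binary :all: tokenizers
```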
"I'm also having these same errors. It gets past the install, but then when running the tests `pip install -e \".[testing]\"` I get: \r\n\r\n`error: Can not find Rust compiler\r\n ----------------------------------------\r\n ERROR: Failed building wheel for tokenizers\r\nFailed to build tokenizers\r\nERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hey guys, @JohnGiorgi @Shane-Neeley , I think I figured out what was happening to you (to me also). At the time of your question, I think there was a high possibility that in the \"setup.py\" of transformers' source code, there was a line writing \"tokenizers=0.5.2\" ; but in fact this old version will bother the update of transformers.\r\nBut when I install the newest tokenizers by pip (pip install -U tokenizers), you will get a tokenizers==0.7.0. That's why at the time of your question, there were always conflits and bugs about this tokenizers (I also installed Rust and setuptool-rust, it was always the same error).\r\n\r\nAnd now they just corrected this line in the setup.py. So I just suggest you to \r\n1. uninstall your old version transformers (very important!)\r\n2. pip install -U tokenizers (so that it becomes tokenizers==0.7.0)\r\n3. install transformers from source !\r\n\r\nThen voilà ! You'll get a brand new smoothy transformers.",
"> Hey guys, @JohnGiorgi @Shane-Neeley , I think I figured out what was happening to you (to me also). At the time of your question, I think there was a high possibility that in the \"setup.py\" of transformers' source code, there was a line writing \"tokenizers=0.5.2\" ; but in fact this old version will bother the update of transformers.\r\n> But when I install the newest tokenizers by pip (pip install -U tokenizers), you will get a tokenizers==0.7.0. That's why at the time of your question, there were always conflits and bugs about this tokenizers (I also installed Rust and setuptool-rust, it was always the same error).\r\n> \r\n> And now they just corrected this line in the setup.py. So I just suggest you to\r\n> \r\n> 1. uninstall your old version transformers (very important!)\r\n> 2. pip install -U tokenizers (so that it becomes tokenizers==0.7.0)\r\n> 3. install transformers from source !\r\n> \r\n> Then voilà ! You'll get a brand new smoothy transformers.\r\n\r\nI am installing transformers from source but I am getting the error.",
"`curl https://sh.rustup.rs -sSf | sh`\r\n`source $HOME/.cargo/env`\r\n`pip3 install --upgrade transformers`\r\n\r\nthese lines worked for me",
"> `curl https://sh.rustup.rs -sSf | sh` `source $HOME/.cargo/env` `pip3 install --upgrade transformers`\r\n> \r\n> эти строки сработали для меня\r\n\r\nПодскажите, пожалуйста, куда именно Вы вводите данные команды?\r\n\r\nPlease tell me where exactly you enter these commands?"
] | 1,582 | 1,705 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
I cannot run `pip install transformers` for any release newer than `2.3.0`. The install errors out when trying to build `tokenizers`. This is similar to [another issue](https://github.com/huggingface/transformers/issues/2831), except that I have a Rust compiler in my environment, so I do not see `"error: can not find Rust Compiler"`.
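In the meantime, pinning to the last release that installs cleanly for me works as a stopgap (a workaround, not a fix):

```bash
# 2.3.0 is the newest release that installs here and, as far as I can
# tell, predates the Rust-based tokenizers dependency, so no Rust
# toolchain is needed.
pip install "transformers==2.3.0"
```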
## Information
Model I am using (Bert, XLNet ...):
N/A
Language I am using the model on (English, Chinese ...):
N/A
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
pip install transformers
```
which leads to the following error:
```
Building wheels for collected packages: tokenizers
Building wheel for tokenizers (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /home/johnmg/t2t/bin/python /home/johnmg/t2t/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpcvv0fpj6
cwd: /tmp/pip-install-d2wcoxbe/tokenizers
Complete output (221 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/tokenizers
copying tokenizers/__init__.py -> build/lib/tokenizers
creating build/lib/tokenizers/models
copying tokenizers/models/__init__.py -> build/lib/tokenizers/models
creating build/lib/tokenizers/decoders
copying tokenizers/decoders/__init__.py -> build/lib/tokenizers/decoders
creating build/lib/tokenizers/normalizers
copying tokenizers/normalizers/__init__.py -> build/lib/tokenizers/normalizers
creating build/lib/tokenizers/pre_tokenizers
copying tokenizers/pre_tokenizers/__init__.py -> build/lib/tokenizers/pre_tokenizers
creating build/lib/tokenizers/processors
copying tokenizers/processors/__init__.py -> build/lib/tokenizers/processors
creating build/lib/tokenizers/trainers
copying tokenizers/trainers/__init__.py -> build/lib/tokenizers/trainers
creating build/lib/tokenizers/implementations
copying tokenizers/implementations/sentencepiece_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/char_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/byte_level_bpe.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/bert_wordpiece.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/base_tokenizer.py -> build/lib/tokenizers/implementations
copying tokenizers/implementations/__init__.py -> build/lib/tokenizers/implementations
copying tokenizers/__init__.pyi -> build/lib/tokenizers
copying tokenizers/models/__init__.pyi -> build/lib/tokenizers/models
copying tokenizers/decoders/__init__.pyi -> build/lib/tokenizers/decoders
copying tokenizers/normalizers/__init__.pyi -> build/lib/tokenizers/normalizers
copying tokenizers/pre_tokenizers/__init__.pyi -> build/lib/tokenizers/pre_tokenizers
copying tokenizers/processors/__init__.pyi -> build/lib/tokenizers/processors
copying tokenizers/trainers/__init__.pyi -> build/lib/tokenizers/trainers
running build_ext
running build_rust
Updating crates.io index
warning: unused manifest key: target.x86_64-apple-darwin.rustflags
Compiling proc-macro2 v1.0.8
Compiling unicode-xid v0.2.0
Compiling syn v1.0.15
Compiling libc v0.2.67
Compiling autocfg v1.0.0
Compiling lazy_static v1.4.0
Compiling cfg-if v0.1.10
Compiling semver-parser v0.7.0
Compiling memchr v2.3.3
Compiling serde v1.0.104
Compiling maybe-uninit v2.0.0
Compiling ryu v1.0.2
Compiling regex-syntax v0.6.14
Compiling getrandom v0.1.14
Compiling scopeguard v1.1.0
Compiling unicode-width v0.1.7
Compiling itoa v0.4.5
Compiling bitflags v1.2.1
Running `rustc --crate-name unicode_xid /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-xid-0.2.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=d0c8361b0afb9c55 -C extra-filename=-d0c8361b0afb9c55 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=97f59661e87a2bff -C extra-filename=-97f59661e87a2bff --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/proc-macro2-97f59661e87a2bff -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=be4245bf41be9154 -C extra-filename=-be4245bf41be9154 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/syn-be4245bf41be9154 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=4aa17b314a9f9392 -C extra-filename=-4aa17b314a9f9392 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/libc-4aa17b314a9f9392 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name autocfg /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/autocfg-1.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0b99959d54eb5a43 -C extra-filename=-0b99959d54eb5a43 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name lazy_static /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/lazy_static-1.4.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=7fe90463f0542b89 -C extra-filename=-7fe90463f0542b89 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name semver_parser /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-parser-0.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ce71380f50d590b6 -C extra-filename=-ce71380f50d590b6 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name cfg_if /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/cfg-if-0.1.10/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=1b04fa8f4baea64e -C extra-filename=-1b04fa8f4baea64e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=682166ccfd58c578 -C extra-filename=-682166ccfd58c578 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memchr-682166ccfd58c578 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=e3191056f1858817 -C extra-filename=-e3191056f1858817 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/serde-e3191056f1858817 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=99bcd9a60d46382c -C extra-filename=-99bcd9a60d46382c --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/maybe-uninit-99bcd9a60d46382c -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=e704a6b7a71f3d7a -C extra-filename=-e704a6b7a71f3d7a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/ryu-e704a6b7a71f3d7a -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name regex_syntax /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-syntax-0.6.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=feb44197369905d4 -C extra-filename=-feb44197369905d4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="std"' -C metadata=c2394b8b43d330b2 -C extra-filename=-c2394b8b43d330b2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/getrandom-c2394b8b43d330b2 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name scopeguard /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/scopeguard-1.1.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=8a63ed9d96488c18 -C extra-filename=-8a63ed9d96488c18 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_width /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-width-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=397b6227577b65ae -C extra-filename=-397b6227577b65ae --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name itoa /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/itoa-0.4.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=b9ca519a13df71bf -C extra-filename=-b9ca519a13df71bf --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Compiling ppv-lite86 v0.2.6
Compiling rayon-core v1.7.0
Compiling unindent v0.1.5
Compiling version_check v0.9.1
Compiling strsim v0.8.0
Compiling vec_map v0.8.1
Compiling either v1.5.3
Compiling number_prefix v0.3.0
Compiling smallvec v1.2.0
Compiling ansi_term v0.11.0
Compiling unicode_categories v0.1.1
Compiling spin v0.5.2
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' -C metadata=74dee2c088f4fdf7 -C extra-filename=-74dee2c088f4fdf7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/bitflags-74dee2c088f4fdf7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name ppv_lite86 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ppv-lite86-0.2.6/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=a66cdba604de2e91 -C extra-filename=-a66cdba604de2e91 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=01d02e6ac28bd2a9 -C extra-filename=-01d02e6ac28bd2a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/rayon-core-01d02e6ac28bd2a9 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name version_check /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/version_check-0.9.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=cb4ea0451d56bc0d -C extra-filename=-cb4ea0451d56bc0d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name unindent /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unindent-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=54ee88ab0038e61b -C extra-filename=-54ee88ab0038e61b --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name strsim /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/strsim-0.8.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=97f24fc34d9d28cd -C extra-filename=-97f24fc34d9d28cd --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name vec_map /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/vec_map-0.8.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=135a62f8a977656a -C extra-filename=-135a62f8a977656a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name either /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/either-1.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6fcca30f288e7f70 -C extra-filename=-6fcca30f288e7f70 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name number_prefix /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/number_prefix-0.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=c617a5b231fd33f2 -C extra-filename=-c617a5b231fd33f2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --edition=2018 --crate-name smallvec /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/smallvec-1.2.0/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5a1b93c8a07d924a -C extra-filename=-5a1b93c8a07d924a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name ansi_term /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ansi_term-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=f9c541f6d3ce7af3 -C extra-filename=-f9c541f6d3ce7af3 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name unicode_categories /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode_categories-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=3376acf28b9791fe -C extra-filename=-3376acf28b9791fe --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Running `rustc --crate-name spin /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.5.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6c5053f023d06140 -C extra-filename=-6c5053f023d06140 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow`
Compiling thread_local v1.0.1
Running `rustc --crate-name thread_local /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/thread_local-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=7cdd46b26d4f9805 -C extra-filename=-7cdd46b26d4f9805 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --cap-lints allow`
Compiling textwrap v0.11.0
Running `rustc --crate-name textwrap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/textwrap-0.11.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=0f312c8508ee8e9d -C extra-filename=-0f312c8508ee8e9d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --cap-lints allow`
Compiling semver v0.9.0
Running `rustc --crate-name semver /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/semver-0.9.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=21f81775743ad422 -C extra-filename=-21f81775743ad422 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern semver_parser=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsemver_parser-ce71380f50d590b6.rmeta --cap-lints allow`
Compiling unicode-normalization-alignments v0.1.12
Running `rustc --crate-name unicode_normalization_alignments /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/unicode-normalization-alignments-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=2f95bcef5d770e35 -C extra-filename=-2f95bcef5d770e35 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern smallvec=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsmallvec-5a1b93c8a07d924a.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/rayon-core-01d02e6ac28bd2a9/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memchr-682166ccfd58c578/build-script-build`
Running `rustc --crate-name memchr /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memchr-2.3.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' --cfg 'feature="use_std"' -C metadata=916f9f60d041f29f -C extra-filename=-916f9f60d041f29f --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg memchr_runtime_simd --cfg memchr_runtime_sse2 --cfg memchr_runtime_sse42 --cfg memchr_runtime_avx`
Compiling rustc_version v0.2.3
Running `rustc --crate-name rustc_version /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rustc_version-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=63af08b6d5f0b1a9 -C extra-filename=-63af08b6d5f0b1a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern semver=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsemver-21f81775743ad422.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/getrandom-c2394b8b43d330b2/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/bitflags-74dee2c088f4fdf7/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/maybe-uninit-99bcd9a60d46382c/build-script-build`
Compiling crossbeam-utils v0.7.2
Compiling crossbeam-epoch v0.8.2
Compiling num-traits v0.2.11
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=5b24f08aed575110 -C extra-filename=-5b24f08aed575110 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-utils-5b24f08aed575110 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=515968de6557b6b7 -C extra-filename=-515968de6557b6b7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-epoch-515968de6557b6b7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=618c6828911959ab -C extra-filename=-618c6828911959ab --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/num-traits-618c6828911959ab -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern autocfg=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libautocfg-0b99959d54eb5a43.rlib --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/ryu-e704a6b7a71f3d7a/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/serde-e3191056f1858817/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/libc-4aa17b314a9f9392/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/syn-be4245bf41be9154/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/proc-macro2-97f59661e87a2bff/build-script-build`
Running `rustc --crate-name bitflags /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/bitflags-1.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=95683541c46a6653 -C extra-filename=-95683541c46a6653 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg bitflags_const_fn`
Running `rustc --crate-name maybe_uninit /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/maybe-uninit-2.0.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=169701ffcef7a104 -C extra-filename=-169701ffcef7a104 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg derive_copy --cfg repr_transparent --cfg native_uninit`
Compiling c2-chacha v0.2.3
Running `rustc --edition=2018 --crate-name c2_chacha /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/c2-chacha-0.2.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="simd"' --cfg 'feature="std"' -C metadata=db6d7fc899faf453 -C extra-filename=-db6d7fc899faf453 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ppv_lite86=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libppv_lite86-a66cdba604de2e91.rmeta --cap-lints allow`
Running `rustc --crate-name ryu /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ryu-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=81c16096c65f1d25 -C extra-filename=-81c16096c65f1d25 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg integer128 --cfg must_use_return --cfg maybe_uninit`
Running `rustc --edition=2018 --crate-name proc_macro2 /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro2-1.0.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=ce62abe820ec95ab -C extra-filename=-ce62abe820ec95ab --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern unicode_xid=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_xid-d0c8361b0afb9c55.rmeta --cap-lints allow --cfg use_proc_macro --cfg wrap_proc_macro`
Running `rustc --crate-name libc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/libc-0.2.67/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5d9b252ed56b1945 -C extra-filename=-5d9b252ed56b1945 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg freebsd11 --cfg libc_priv_mod_use --cfg libc_union --cfg libc_const_size_of --cfg libc_align --cfg libc_core_cvoid --cfg libc_packedN`
Compiling aho-corasick v0.7.8
Running `rustc --crate-name aho_corasick /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/aho-corasick-0.7.8/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5f8df0f6460a66c2 -C extra-filename=-5f8df0f6460a66c2 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern memchr=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemchr-916f9f60d041f29f.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/num-traits-618c6828911959ab/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-utils-5b24f08aed575110/build-script-build`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/crossbeam-epoch-515968de6557b6b7/build-script-build`
Compiling memoffset v0.5.3
Running `rustc --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -C metadata=8ebd67a7766256e7 -C extra-filename=-8ebd67a7766256e7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memoffset-8ebd67a7766256e7 -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern rustc_version=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librustc_version-63af08b6d5f0b1a9.rlib --cap-lints allow`
Compiling quote v1.0.2
Running `rustc --edition=2018 --crate-name quote /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/quote-1.0.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="proc-macro"' -C metadata=5dd3b63b3c37ba50 -C extra-filename=-5dd3b63b3c37ba50 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --cap-lints allow`
Running `rustc --crate-name num_traits /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num-traits-0.2.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=682c148c69950086 -C extra-filename=-682c148c69950086 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg has_i128`
Compiling num_cpus v1.12.0
Compiling termios v0.3.1
Compiling clicolors-control v1.0.1
Compiling atty v0.2.14
Running `rustc --edition=2018 --crate-name getrandom /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/getrandom-0.1.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=3bcce62cba29d0a1 -C extra-filename=-3bcce62cba29d0a1 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name num_cpus /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/num_cpus-1.12.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5375d111fec819b4 -C extra-filename=-5375d111fec819b4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name termios /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/termios-0.3.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=d64097ba20dddbc5 -C extra-filename=-d64097ba20dddbc5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name clicolors_control /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clicolors-control-1.0.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="terminal_autoconfig"' -C metadata=f95aedfd36305d68 -C extra-filename=-f95aedfd36305d68 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --crate-name atty /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/atty-0.2.14/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=61c34de20facc8fb -C extra-filename=-61c34de20facc8fb --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --cap-lints allow`
Running `rustc --edition=2018 --crate-name syn /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/syn-1.0.15/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="clone-impls"' --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="extra-traits"' --cfg 'feature="full"' --cfg 'feature="parsing"' --cfg 'feature="printing"' --cfg 'feature="proc-macro"' --cfg 'feature="quote"' --cfg 'feature="visit"' -C metadata=fb7a652ed3ecc931 -C extra-filename=-fb7a652ed3ecc931 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rmeta --extern unicode_xid=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_xid-d0c8361b0afb9c55.rmeta --cap-lints allow --cfg syn_disable_nightly_tests`
Compiling clap v2.33.0
Running `rustc --crate-name clap /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/clap-2.33.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="ansi_term"' --cfg 'feature="atty"' --cfg 'feature="color"' --cfg 'feature="default"' --cfg 'feature="strsim"' --cfg 'feature="suggestions"' --cfg 'feature="vec_map"' -C metadata=4d1679758f5cc3c5 -C extra-filename=-4d1679758f5cc3c5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ansi_term=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libansi_term-f9c541f6d3ce7af3.rmeta --extern atty=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libatty-61c34de20facc8fb.rmeta --extern bitflags=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libbitflags-95683541c46a6653.rmeta --extern strsim=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libstrsim-97f24fc34d9d28cd.rmeta --extern textwrap=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libtextwrap-0f312c8508ee8e9d.rmeta --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --extern vec_map=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libvec_map-135a62f8a977656a.rmeta --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/memoffset-8ebd67a7766256e7/build-script-build`
Compiling rand_core v0.5.1
Running `rustc --edition=2018 --crate-name rand_core /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_core-0.5.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="getrandom"' --cfg 'feature="std"' -C metadata=4adb25904fdd70df -C extra-filename=-4adb25904fdd70df --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern getrandom=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libgetrandom-3bcce62cba29d0a1.rmeta --cap-lints allow`
Running `rustc --crate-name memoffset /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/memoffset-0.5.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=ee652fbed0600815 -C extra-filename=-ee652fbed0600815 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --cap-lints allow --cfg memoffset_maybe_uninit --cfg memoffset_doctests`
Running `rustc --crate-name crossbeam_utils /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-utils-0.7.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=71c95db82240db48 -C extra-filename=-71c95db82240db48 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --cap-lints allow --cfg has_min_const_fn --cfg has_atomic_u8 --cfg has_atomic_u16 --cfg has_atomic_u32 --cfg has_atomic_u64`
Compiling rand_chacha v0.2.1
Running `rustc --edition=2018 --crate-name rand_chacha /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand_chacha-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="std"' -C metadata=164a44df65235912 -C extra-filename=-164a44df65235912 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern c2_chacha=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libc2_chacha-db6d7fc899faf453.rmeta --extern rand_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_core-4adb25904fdd70df.rmeta --cap-lints allow`
Compiling rand v0.7.3
Running `rustc --edition=2018 --crate-name rand /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="alloc"' --cfg 'feature="default"' --cfg 'feature="getrandom"' --cfg 'feature="getrandom_package"' --cfg 'feature="libc"' --cfg 'feature="std"' -C metadata=102d035e4ca6c699 -C extra-filename=-102d035e4ca6c699 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern getrandom_package=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libgetrandom-3bcce62cba29d0a1.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --extern rand_chacha=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_chacha-164a44df65235912.rmeta --extern rand_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand_core-4adb25904fdd70df.rmeta --cap-lints allow`
Compiling crossbeam-queue v0.2.1
Running `rustc --crate-name crossbeam_epoch /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-epoch-0.8.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="lazy_static"' --cfg 'feature="std"' -C metadata=4cd0c2190306aa4a -C extra-filename=-4cd0c2190306aa4a --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern maybe_uninit=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmaybe_uninit-169701ffcef7a104.rmeta --extern memoffset=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemoffset-ee652fbed0600815.rmeta --extern scopeguard=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libscopeguard-8a63ed9d96488c18.rmeta --cap-lints allow --cfg has_min_const_fn`
Running `rustc --crate-name crossbeam_queue /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-queue-0.2.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=e5afea70501509a9 -C extra-filename=-e5afea70501509a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern cfg_if=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcfg_if-1b04fa8f4baea64e.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --cap-lints allow`
Compiling regex v1.3.4
Running `rustc --crate-name regex /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/regex-1.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="aho-corasick"' --cfg 'feature="default"' --cfg 'feature="memchr"' --cfg 'feature="perf"' --cfg 'feature="perf-cache"' --cfg 'feature="perf-dfa"' --cfg 'feature="perf-inline"' --cfg 'feature="perf-literal"' --cfg 'feature="std"' --cfg 'feature="thread_local"' --cfg 'feature="unicode"' --cfg 'feature="unicode-age"' --cfg 'feature="unicode-bool"' --cfg 'feature="unicode-case"' --cfg 'feature="unicode-gencat"' --cfg 'feature="unicode-perl"' --cfg 'feature="unicode-script"' --cfg 'feature="unicode-segment"' -C metadata=40c5630aef8afe3e -C extra-filename=-40c5630aef8afe3e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern aho_corasick=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libaho_corasick-5f8df0f6460a66c2.rmeta --extern memchr=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmemchr-916f9f60d041f29f.rmeta --extern regex_syntax=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex_syntax-feb44197369905d4.rmeta --extern thread_local=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libthread_local-7cdd46b26d4f9805.rmeta --cap-lints allow`
Compiling crossbeam-deque v0.7.3
Running `rustc --crate-name crossbeam_deque /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/crossbeam-deque-0.7.3/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=6fff16ed40375025 -C extra-filename=-6fff16ed40375025 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_epoch=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_epoch-4cd0c2190306aa4a.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern maybe_uninit=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libmaybe_uninit-169701ffcef7a104.rmeta --cap-lints allow`
Running `rustc --edition=2018 --crate-name rayon_core /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-core-1.7.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=5e812e4a0a947026 -C extra-filename=-5e812e4a0a947026 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_deque-6fff16ed40375025.rmeta --extern crossbeam_queue=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_queue-e5afea70501509a9.rmeta --extern crossbeam_utils=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_utils-71c95db82240db48.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern num_cpus=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libnum_cpus-5375d111fec819b4.rmeta --cap-lints allow`
Compiling rayon v1.3.0
Running `rustc --edition=2018 --crate-name rayon /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/rayon-1.3.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=35b9846c719edc99 -C extra-filename=-35b9846c719edc99 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern crossbeam_deque=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libcrossbeam_deque-6fff16ed40375025.rmeta --extern either=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libeither-6fcca30f288e7f70.rmeta --extern rayon_core=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librayon_core-5e812e4a0a947026.rmeta --cap-lints allow`
Compiling console v0.9.2
Running `rustc --edition=2018 --crate-name console /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/console-0.9.2/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="unicode-width"' -C metadata=a0e410a25b05d297 -C extra-filename=-a0e410a25b05d297 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern clicolors_control=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libclicolors_control-f95aedfd36305d68.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern libc=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblibc-5d9b252ed56b1945.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --extern termios=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libtermios-d64097ba20dddbc5.rmeta --extern unicode_width=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_width-397b6227577b65ae.rmeta --cap-lints allow`
Compiling indicatif v0.14.0
Running `rustc --edition=2018 --crate-name indicatif /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indicatif-0.14.0/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' -C metadata=af81cb79d58cbea4 -C extra-filename=-af81cb79d58cbea4 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern console=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libconsole-a0e410a25b05d297.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern number_prefix=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libnumber_prefix-c617a5b231fd33f2.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --cap-lints allow`
Compiling pyo3-derive-backend v0.8.5
Running `rustc --edition=2018 --crate-name pyo3_derive_backend /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-derive-backend-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=4e1d8a522a8e0abc -C extra-filename=-4e1d8a522a8e0abc --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rmeta --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rmeta --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rmeta --cap-lints allow`
Compiling serde_derive v1.0.104
Compiling proc-macro-hack v0.5.11
Compiling ctor v0.1.12
Compiling ghost v0.1.1
Compiling inventory-impl v0.1.5
Compiling pyo3cls v0.8.5
Running `rustc --crate-name serde_derive /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_derive-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 --cfg 'feature="default"' -C metadata=c97a5ca23329a0e7 -C extra-filename=-c97a5ca23329a0e7 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name proc_macro_hack /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/proc-macro-hack-0.5.11/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=6c301fa525410f51 -C extra-filename=-6c301fa525410f51 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name ctor /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ctor-0.1.12/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=6d15aecbd8ecf9a9 -C extra-filename=-6d15aecbd8ecf9a9 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name ghost /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/ghost-0.1.1/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=a7fa8d8cb581322e -C extra-filename=-a7fa8d8cb581322e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name inventory_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-impl-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=07295296bec98a10 -C extra-filename=-07295296bec98a10 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name pyo3cls /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3cls-0.8.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=4e128ea32e108d4e -C extra-filename=-4e128ea32e108d4e --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern pyo3_derive_backend=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libpyo3_derive_backend-4e1d8a522a8e0abc.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Compiling paste-impl v0.1.7
Compiling indoc-impl v0.3.4
Running `rustc --edition=2018 --crate-name paste_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-impl-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=0100b2f59cf859eb -C extra-filename=-0100b2f59cf859eb --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --cap-lints allow`
Running `rustc --edition=2018 --crate-name indoc_impl /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-impl-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type proc-macro --emit=dep-info,link -C prefer-dynamic -C opt-level=3 -C metadata=b6ef86cd971e397d -C extra-filename=-b6ef86cd971e397d --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --extern proc_macro2=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro2-ce62abe820ec95ab.rlib --extern quote=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libquote-5dd3b63b3c37ba50.rlib --extern syn=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libsyn-fb7a652ed3ecc931.rlib --extern unindent=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunindent-54ee88ab0038e61b.rlib --cap-lints allow`
Compiling inventory v0.1.5
Running `rustc --edition=2018 --crate-name inventory /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/inventory-0.1.5/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=81f198a01c0ec25f -C extra-filename=-81f198a01c0ec25f --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern ctor=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libctor-6d15aecbd8ecf9a9.so --extern ghost=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libghost-a7fa8d8cb581322e.so --extern inventory_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libinventory_impl-07295296bec98a10.so --cap-lints allow`
Compiling indoc v0.3.4
Running `rustc --edition=2018 --crate-name indoc /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/indoc-0.3.4/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=3e8e6670af4a3851 -C extra-filename=-3e8e6670af4a3851 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern indoc_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libindoc_impl-b6ef86cd971e397d.so --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --cap-lints allow`
Compiling paste v0.1.7
Running `rustc --edition=2018 --crate-name paste /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/paste-0.1.7/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=331a39d39ec573ee -C extra-filename=-331a39d39ec573ee --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern paste_impl=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libpaste_impl-0100b2f59cf859eb.so --extern proc_macro_hack=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libproc_macro_hack-6c301fa525410f51.so --cap-lints allow`
Running `rustc --crate-name serde /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde-1.0.104/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="derive"' --cfg 'feature="serde_derive"' --cfg 'feature="std"' -C metadata=ef69b68005b70c62 -C extra-filename=-ef69b68005b70c62 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern serde_derive=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_derive-c97a5ca23329a0e7.so --cap-lints allow --cfg ops_bound --cfg core_reverse --cfg de_boxed_c_str --cfg de_boxed_path --cfg de_rc_dst --cfg core_duration --cfg integer128 --cfg range_inclusive --cfg num_nonzero --cfg core_try_from --cfg num_nonzero_signed --cfg std_atomic64 --cfg std_atomic`
Compiling serde_json v1.0.48
Running `rustc --edition=2018 --crate-name serde_json /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/serde_json-1.0.48/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="std"' -C metadata=5007cd62ba6989a5 -C extra-filename=-5007cd62ba6989a5 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern itoa=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libitoa-b9ca519a13df71bf.rmeta --extern ryu=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libryu-81c16096c65f1d25.rmeta --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rmeta --cap-lints allow`
Compiling tokenizers v0.7.0 (/tmp/pip-install-d2wcoxbe/tokenizers/tokenizers-lib)
Running `rustc --edition=2018 --crate-name tokenizers tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C metadata=02f2af0a4056c877 -C extra-filename=-02f2af0a4056c877 --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern clap=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libclap-4d1679758f5cc3c5.rmeta --extern indicatif=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libindicatif-af81cb79d58cbea4.rmeta --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rmeta --extern rand=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librand-102d035e4ca6c699.rmeta --extern rayon=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/librayon-35b9846c719edc99.rmeta --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rmeta --extern regex_syntax=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex_syntax-feb44197369905d4.rmeta --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rmeta --extern serde_json=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_json-5007cd62ba6989a5.rmeta --extern unicode_normalization_alignments=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_normalization_alignments-2f95bcef5d770e35.rmeta --extern unicode_categories=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libunicode_categories-3376acf28b9791fe.rmeta`
Compiling pyo3 v0.8.5
Running `rustc --edition=2018 --crate-name build_script_build /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.8.5/build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 --cfg 'feature="default"' --cfg 'feature="extension-module"' --cfg 'feature="python3"' -C metadata=7ff152acc5305eee -C extra-filename=-7ff152acc5305eee --out-dir /tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee -L dependency=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps --extern lazy_static=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/liblazy_static-7fe90463f0542b89.rlib --extern regex=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libregex-40c5630aef8afe3e.rlib --extern serde=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde-ef69b68005b70c62.rlib --extern serde_json=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libserde_json-5007cd62ba6989a5.rlib --extern version_check=/tmp/pip-install-d2wcoxbe/tokenizers/target/release/deps/libversion_check-cb4ea0451d56bc0d.rlib --cap-lints allow`
Running `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee/build-script-build`
error: failed to run custom build command for `pyo3 v0.8.5`
Caused by:
process didn't exit successfully: `/tmp/pip-install-d2wcoxbe/tokenizers/target/release/build/pyo3-7ff152acc5305eee/build-script-build` (exit code: 101)
--- stderr
thread 'main' panicked at 'Error: pyo3 requires a nightly or dev version of Rust.', /home/johnmg/.cargo/registry/src/github.com-1ecc6299db9ec823/pyo3-0.8.5/build.rs:542:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
warning: build failed, waiting for other jobs to finish...
error: build failed
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module pyo3/python3 --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
----------------------------------------
ERROR: Failed building wheel for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
```
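For context, the log above ends with `pyo3 v0.8.5` aborting because it requires a nightly Rust toolchain. Assuming `rustup` manages the toolchain on this machine, installing and selecting nightly (`rustup toolchain install nightly`, then `rustup default nightly`) before retrying `pip install transformers` is the usual workaround.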
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A successful install of `transformers`.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `>2.3.0`
- Platform:
```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```
- Python version: `3.7.4`
- PyTorch version (GPU?): `1.4.0`
- Tensorflow version (GPU?): N/A.
- Using GPU in script?: N/A.
- Using distributed or parallel set-up in script?: N/A.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2980/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2979/comments | https://api.github.com/repos/huggingface/transformers/issues/2979/events | https://github.com/huggingface/transformers/issues/2979 | 569,504,848 | MDU6SXNzdWU1Njk1MDQ4NDg= | 2,979 | Question about output pipeline(feature-extraction) | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | NONE | null | Hi,
I'm new to Python and transformers, so please bear with me. I have the following code to use the pipeline wrapper of transformers.
```
from transformers import (
    pipeline
)
import h5py
nlp = pipeline('feature-extraction', model='bert-base-cased', config='bert-base-cased', tokenizer='bert-base-cased', device=-1)
test = nlp("PersonA: Hi . PersonB: How are you doing ? PersonA: I 'm doing alright thank you very much.")
h5f = h5py.File('test.h5', 'a')
h5f.create_dataset('name', data=test)
h5f.close()
```
The above proof of concept works fine for me; however, I have two questions.
1. If I look at the shape of the h5py dataset, it is (1, 270, 768). For my purpose I require the shape (270, 768). How can I make sure the output gets saved to h5 in this format? (One possible approach is sketched below.)
2. `device=-1` means the code will be executed on the CPU, correct?
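For question 1, the best I have come up with so far is to drop the leading batch axis before writing; a minimal sketch, assuming the pipeline output is a plain nested list with a batch axis of size 1:

```python
import h5py
import numpy as np

features = np.asarray(test)   # shape (1, 270, 768): a batch holding one sequence
features = features[0]        # drop the batch axis -> shape (270, 768)

with h5py.File('test.h5', 'w') as h5f:
    h5f.create_dataset('name', data=features)
```

For question 2, my understanding is that `device=-1` places the pipeline on the CPU while non-negative values select the corresponding CUDA device, but I would appreciate confirmation.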
Could someone please help me out with these two? ;) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2979/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2978/comments | https://api.github.com/repos/huggingface/transformers/issues/2978/events | https://github.com/huggingface/transformers/issues/2978 | 569,483,225 | MDU6SXNzdWU1Njk0ODMyMjU= | 2,978 | unreadable code in utils_glue | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for your comment. Github allows you to look at older versions of the [code](https://github.com/huggingface/transformers/blob/v1.0.0/examples/utils_glue.py)."
] | 1,582 | 1,582 | 1,582 | NONE | null | Hi
Previously the function "convert_examples_to_features" was implemented very nicely; now it is implemented with so many nested function calls that it is very hard to read and, IMO, unreadable. Could you revert this implementation to the previous, readable version?
thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2978/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2977/comments | https://api.github.com/repos/huggingface/transformers/issues/2977/events | https://github.com/huggingface/transformers/pull/2977 | 569,462,503 | MDExOlB1bGxSZXF1ZXN0Mzc4NjgyMTcz | 2,977 | Fix for case of multi-gpu | {
"login": "orena1",
"id": 8983713,
"node_id": "MDQ6VXNlcjg5ODM3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orena1",
"html_url": "https://github.com/orena1",
"followers_url": "https://api.github.com/users/orena1/followers",
"following_url": "https://api.github.com/users/orena1/following{/other_user}",
"gists_url": "https://api.github.com/users/orena1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orena1/subscriptions",
"organizations_url": "https://api.github.com/users/orena1/orgs",
"repos_url": "https://api.github.com/users/orena1/repos",
"events_url": "https://api.github.com/users/orena1/events{/privacy}",
"received_events_url": "https://api.github.com/users/orena1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | CONTRIBUTOR | null | When loading the optimizer and the scheduler in a multi-GPU setup, the loading will put the optimizer and scheduler on cuda:0, which might not have enough memory to temporarily store them (until the `.to(device)` below). A minimal sketch of the idea (the checkpoint path here is hypothetical):
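```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Loading the checkpoint onto CPU first keeps the temporary copy off cuda:0,
# which may not have room for it; the same applies to the scheduler state.
state = torch.load("checkpoint/optimizer.pt", map_location="cpu")
optimizer.load_state_dict(state)
```
| {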
"url": "https://api.github.com/repos/huggingface/transformers/issues/2977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2977/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2977",
"html_url": "https://github.com/huggingface/transformers/pull/2977",
"diff_url": "https://github.com/huggingface/transformers/pull/2977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2977.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2976/comments | https://api.github.com/repos/huggingface/transformers/issues/2976/events | https://github.com/huggingface/transformers/issues/2976 | 569,441,565 | MDU6SXNzdWU1Njk0NDE1NjU= | 2,976 | XLMRobertaTokenizer vocab size | {
"login": "ssdorsey",
"id": 10930222,
"node_id": "MDQ6VXNlcjEwOTMwMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10930222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssdorsey",
"html_url": "https://github.com/ssdorsey",
"followers_url": "https://api.github.com/users/ssdorsey/followers",
"following_url": "https://api.github.com/users/ssdorsey/following{/other_user}",
"gists_url": "https://api.github.com/users/ssdorsey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssdorsey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssdorsey/subscriptions",
"organizations_url": "https://api.github.com/users/ssdorsey/orgs",
"repos_url": "https://api.github.com/users/ssdorsey/repos",
"events_url": "https://api.github.com/users/ssdorsey/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssdorsey/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"Running the following code caused error for me: \r\n\r\n> import transformers\r\n> tokenizer = transformers.AutoTokenizer.from_pretrained(\"xlm-roberta-base\")\r\n> tokenizer.convert_ids_to_tokens(range(tokenizer.vocab_size))\r\n\r\nActually, the `tokenizer.vocab_size` is `250005`, the last id `250004` is `<mask>`, but the ids from `250001` to `250003` do not exist.",
"> Actually, the `tokenizer.vocab_size` is `250005`, the last id `250004` is `<mask>`, but the ids from `250001` to `250003` do not exist.\r\n\r\nYa ok this is definitely the problem. Either way, it's an issue for the current implementation of get_vocab which will crash at 25001:\r\n\r\n```\r\n def get_vocab(self):\r\n vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}\r\n vocab.update(self.added_tokens_encoder)\r\n return vocab\r\n```\r\n",
"I wonder if this issue will be fixed? Currently it is not...",
"This issue is known and will be fixed.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This should have been fixed with https://github.com/huggingface/transformers/pull/3198"
] | 1,582 | 1,593 | 1,593 | NONE | null | I think the XLMRobertaTokenizer vocab_size is off: it currently double-counts ```'<unk>' | '<s>' | '</s>'```.
Maybe change it to:
```
def vocab_size(self):
    return len(self.sp_model) + self.fairseq_offset
```
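For reference, a small diagnostic sketch (echoing the failing snippet in the comments above) that surfaces the ids inside the reported range with no corresponding token:

```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
print(tokenizer.vocab_size)  # reported size, currently too large

# Ids inside [0, vocab_size) that cannot be mapped back to a token
# expose the double-counting described above.
for i in range(tokenizer.vocab_size):
    try:
        tokenizer.convert_ids_to_tokens(i)
    except Exception:
        print("no token for id", i)
```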
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2976/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2975/comments | https://api.github.com/repos/huggingface/transformers/issues/2975/events | https://github.com/huggingface/transformers/issues/2975 | 569,441,134 | MDU6SXNzdWU1Njk0NDExMzQ= | 2,975 | GPT2 always has largest attention on first token? | {
"login": "jzhoubu",
"id": 20299401,
"node_id": "MDQ6VXNlcjIwMjk5NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/20299401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhoubu",
"html_url": "https://github.com/jzhoubu",
"followers_url": "https://api.github.com/users/jzhoubu/followers",
"following_url": "https://api.github.com/users/jzhoubu/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhoubu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhoubu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhoubu/subscriptions",
"organizations_url": "https://api.github.com/users/jzhoubu/orgs",
"repos_url": "https://api.github.com/users/jzhoubu/repos",
"events_url": "https://api.github.com/users/jzhoubu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhoubu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @sysu-zjw, this is indeed a very interesting observation that you made! \r\nOne thing to notice first is that GPT2 uses casual masking. So when looking at the attention weights ( corresponding to your variable `last_layer_attns_per_head` which I think should actually be called `last_layer_attns_avg_over_heads` ;-) ):\r\n\r\n```\r\n[1.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\r\n[0.8310, 0.1690, 0.0000, 0.0000, 0.0000, 0.0000],\r\n[0.7752, 0.1165, 0.1083, 0.0000, 0.0000, 0.0000],\r\n[0.6962, 0.1208, 0.1039, 0.0790, 0.0000, 0.0000],\r\n[0.8273, 0.0428, 0.0410, 0.0356, 0.0533, 0.0000],\r\n[0.6496, 0.0758, 0.0439, 0.0328, 0.0853, 0.1125]\r\n```\r\nIt is obvious that the attention weights of the first token for example can only attend to the first token and so on.\r\nBut as you noticed the more interesting part is that also the last token seems to always focus most of its attention on the first token (for the last layer).\r\n\r\nI played around with different inputs and also noticed that pretty much the latter half of the transformer layers focus by far most of \"its attention\" on the first token. After googling a bit, I found that your observations were already put in a paper (check it out [here](https://www.aclweb.org/anthology/W19-4808.pdf) - especially Section 4.2). Looking at Figure 2, you should recognize your observations ;-) \r\n\r\nFrom my point of view, there is nothing wrong with your code. I think it's a pattern that has been observed but its reason is not well understood.\r\nIf you find a good explanation for GPT2's behavior in this case (might also be very similar for other transformer architectures), let me know!\r\n\r\n",
"@patrickvonplaten Thanks much for your detailed comment :) I will post here if I find any good explanation."
] | 1,582 | 1,582 | 1,582 | NONE | null | I am trying to print attention weights from the GPT-2 model, and I found something strange.
```
import torch
from transformers import GPT2Tokenizer, GPT2Config, GPT2LMHeadModel

device = torch.device("cuda")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
config = GPT2Config.from_pretrained("gpt2", output_attentions=True)
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config, cache_dir="./cached").to(device)

input_text = "hello, this is Tom ."
input_ids = tokenizer.encode(input_text, return_tensors="pt").to(device)
logits, _, attns = model(input_ids)  # attns: one (1, n_heads, seq_len, seq_len) tensor per layer

last_layer_attns = attns[-1].squeeze(0)                   # (n_heads, seq_len, seq_len)
last_layer_attns_per_head = last_layer_attns.mean(dim=0)  # (seq_len, seq_len), averaged over heads
print(last_layer_attns_per_head[-1])                      # the last token's attention over the sequence
```
Output:
> tensor([0.6496, 0.0758, 0.0439, 0.0328, 0.0853, 0.1125], device='cuda:0', grad_fn=<SelectBackward>)
I have also tried different sentences, but the attention distribution looks the same -- the first token always receives the largest attention. Is there anything wrong in my code? Or can somebody explain why the first token gets the largest attention?
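As a follow-up, here is a small sketch (reusing `attns` from the snippet above) that prints, per layer, how much of the last token's head-averaged attention lands on token 0:

```python
# Assumes `attns` from the snippet above: one tensor per layer,
# each of shape (1, n_heads, seq_len, seq_len).
for layer_idx, layer_attn in enumerate(attns):
    avg_over_heads = layer_attn.squeeze(0).mean(dim=0)  # (seq_len, seq_len)
    print(layer_idx, round(avg_over_heads[-1, 0].item(), 4))
```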
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2975/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2974/comments | https://api.github.com/repos/huggingface/transformers/issues/2974/events | https://github.com/huggingface/transformers/pull/2974 | 569,409,127 | MDExOlB1bGxSZXF1ZXN0Mzc4NjQyODI5 | 2,974 | New CLI using Typer | {
"login": "kabirkhan",
"id": 13891834,
"node_id": "MDQ6VXNlcjEzODkxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13891834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kabirkhan",
"html_url": "https://github.com/kabirkhan",
"followers_url": "https://api.github.com/users/kabirkhan/followers",
"following_url": "https://api.github.com/users/kabirkhan/following{/other_user}",
"gists_url": "https://api.github.com/users/kabirkhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kabirkhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kabirkhan/subscriptions",
"organizations_url": "https://api.github.com/users/kabirkhan/orgs",
"repos_url": "https://api.github.com/users/kabirkhan/repos",
"events_url": "https://api.github.com/users/kabirkhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/kabirkhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing as discussed in https://github.com/huggingface/transformers/issues/2959#issuecomment-619214142"
] | 1,582 | 1,587 | 1,587 | NONE | null | ## Summary
First pass at a New CLI using [Typer](https://typer.tiangolo.com/)
**745** total lines of code vs. 917 for the old CLI, adding only one (optional) dependency (Typer), which is well documented, well supported, and has 99% test coverage.
Currently, this mostly keeps the same option names for each command, but I think we should have a discussion on using CLI Arguments vs. Options.
I haven't put much work into deciding what should be an Argument and what should be an Option yet; I wanted to get some eyes on this before investing more time. A minimal sketch of the split I have in mind appears after the usage examples below.
In short, my position is that everything required for a command to run should be an Argument and should be documented in the docstring, which is automatically included in the help output.
There is good information on preferring Arguments over Options for required values here: https://typer.tiangolo.com/tutorial/arguments/#about-cli-arguments-help
## Usage
```
pip install transformers[new-cli]
```
This adds the `transformers` console script, so users can run commands such as:
```
transformers login
```
or:
```
transformers serve ner
```
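As an illustration of the Argument-vs-Option split proposed above, a minimal Typer sketch (hypothetical command, not the actual code in this PR):

```python
import typer

app = typer.Typer()

@app.command()
def serve(
    task: str = typer.Argument(..., help="Pipeline task to serve, e.g. 'ner'"),
    port: int = typer.Option(8888, help="Port for the HTTP server"),
):
    """Serve a pipeline over HTTP."""
    typer.echo(f"Serving {task} on port {port}")

if __name__ == "__main__":
    app()
```

Here `task` is required and positional, while `port` keeps a sensible default as an Option — which is exactly the split argued for above.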
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2974",
"html_url": "https://github.com/huggingface/transformers/pull/2974",
"diff_url": "https://github.com/huggingface/transformers/pull/2974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2974.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2973/comments | https://api.github.com/repos/huggingface/transformers/issues/2973/events | https://github.com/huggingface/transformers/pull/2973 | 569,403,858 | MDExOlB1bGxSZXF1ZXN0Mzc4NjM5MDI5 | 2,973 | Testing that batch_encode_plus is the same as encode_plus | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=h1) Report\n> Merging [#2973](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `92.85%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2973 +/- ##\n==========================================\n+ Coverage 77.14% 77.17% +0.03% \n==========================================\n Files 98 98 \n Lines 16006 16020 +14 \n==========================================\n+ Hits 12348 12364 +16 \n+ Misses 3658 3656 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <100%> (+0.11%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.06% <92.3%> (+0.45%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=footer). Last update [c36416e...9e3275a](https://codecov.io/gh/huggingface/transformers/pull/2973?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | Spoiler alert: it wasn't.
closes #2960
closes #2658
closes #2654 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2973",
"html_url": "https://github.com/huggingface/transformers/pull/2973",
"diff_url": "https://github.com/huggingface/transformers/pull/2973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2973.patch",
"merged_at": 1582564187000
} |
https://api.github.com/repos/huggingface/transformers/issues/2972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2972/comments | https://api.github.com/repos/huggingface/transformers/issues/2972/events | https://github.com/huggingface/transformers/issues/2972 | 569,401,233 | MDU6SXNzdWU1Njk0MDEyMzM= | 2,972 | Getting the output of the from the forward function of the GPT-2 | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"There isn't a way to retrieve that from the current API, but feel free to clone the repository and modify it to fit your needs.",
"Hello,\r\n\r\nThank you for your reply. Can I make a request to the Hugging Face so that the code can be modified for users to extract ```self.merge_heads(a)``` and ```self.c_proj(a)```? If yes, how can I make the request?\r\n\r\nThank you,",
"> Hello,\r\n> \r\n> Thank you for your reply. Can I make a request to the Hugging Face so that the code can be modified for users to extract `self.merge_heads(a)` and `self.c_proj(a)`? If yes, how can I make the request?\r\n> \r\n> Thank you,\r\n\r\nYou can suggest feature requests here, as a feature (so here in this topic), but chances are very slim that it will be picked up. It is not high in priority, I would think. @LysandreJik suggests that you clone the repository and implement it yourself. If you successfully do so, you can do a pull request so that your code becomes part of this repository!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | Hello,
Is there any way that I can extract the output of `self.merge_heads(a)` and `self.c_proj(a)` from the forward function of the Hugging Face GPT-2, which are found [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L192) and [here](https://github.com/huggingface/transformers/blob/73028c5df0c28ca179fbe565482a9c2143787f61/src/transformers/modeling_gpt2.py#L193)? A hedged sketch of one workaround is below.
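In the meantime, here is a sketch of one way to capture these tensors from outside the library, using PyTorch forward hooks (it assumes the GPT-2 module layout of this release, i.e. `model.h[i].attn.c_proj`):

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        # inputs[0] is the tensor produced by self.merge_heads(a)
        # (the input to c_proj); output is self.c_proj(a).
        captured[name] = (inputs[0].detach(), output.detach())
    return hook

# model.h holds the GPT-2 blocks; each block has attn.c_proj.
for i, block in enumerate(model.h):
    block.attn.c_proj.register_forward_hook(make_hook(f"block_{i}"))

input_ids = tokenizer.encode("Hello, world!", return_tensors="pt")
with torch.no_grad():
    model(input_ids)

merged_heads_out, c_proj_out = captured["block_0"]
print(merged_heads_out.shape, c_proj_out.shape)
```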
Thank you, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2972/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2971/comments | https://api.github.com/repos/huggingface/transformers/issues/2971/events | https://github.com/huggingface/transformers/pull/2971 | 569,397,574 | MDExOlB1bGxSZXF1ZXN0Mzc4NjM0NjYz | 2,971 | fix _update_memory fn call in transformer-xl | {
"login": "andompesta",
"id": 6725612,
"node_id": "MDQ6VXNlcjY3MjU2MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6725612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andompesta",
"html_url": "https://github.com/andompesta",
"followers_url": "https://api.github.com/users/andompesta/followers",
"following_url": "https://api.github.com/users/andompesta/following{/other_user}",
"gists_url": "https://api.github.com/users/andompesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andompesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andompesta/subscriptions",
"organizations_url": "https://api.github.com/users/andompesta/orgs",
"repos_url": "https://api.github.com/users/andompesta/repos",
"events_url": "https://api.github.com/users/andompesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/andompesta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=h1) Report\n> Merging [#2971](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/92487a1dc03c919afa8a961ed7d8ba78fafa21bd?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2971 +/- ##\n==========================================\n+ Coverage 77.14% 77.15% +<.01% \n==========================================\n Files 98 98 \n Lines 16003 16003 \n==========================================\n+ Hits 12346 12347 +1 \n+ Misses 3657 3656 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2971/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.63% <100%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2971/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=footer). Last update [92487a1...7209cf3](https://codecov.io/gh/huggingface/transformers/pull/2971?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Fix bug related to #2970 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2971",
"html_url": "https://github.com/huggingface/transformers/pull/2971",
"diff_url": "https://github.com/huggingface/transformers/pull/2971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2971.patch",
"merged_at": 1582584624000
} |
https://api.github.com/repos/huggingface/transformers/issues/2970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2970/comments | https://api.github.com/repos/huggingface/transformers/issues/2970/events | https://github.com/huggingface/transformers/issues/2970 | 569,394,455 | MDU6SXNzdWU1NjkzOTQ0NTU= | 2,970 | Bug in transfo_xl function call | {
"login": "andompesta",
"id": 6725612,
"node_id": "MDQ6VXNlcjY3MjU2MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6725612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andompesta",
"html_url": "https://github.com/andompesta",
"followers_url": "https://api.github.com/users/andompesta/followers",
"following_url": "https://api.github.com/users/andompesta/following{/other_user}",
"gists_url": "https://api.github.com/users/andompesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andompesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andompesta/subscriptions",
"organizations_url": "https://api.github.com/users/andompesta/orgs",
"repos_url": "https://api.github.com/users/andompesta/repos",
"events_url": "https://api.github.com/users/andompesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/andompesta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | # 🐛 Bug
Mis-ordered arguments in a function call.
On line 785 of modeling_transfo_xl.py you call ``new_mems = self._update_mems(hids, mems, mlen, qlen)``, yet the function declaration is ``def _update_mems(self, hids, mems, qlen, mlen)``.
In other words, the order of ``qlen`` and ``mlen`` is inverted at the call site; an excerpt-style illustration follows below.
Training performance is not affected as long as ``ext_len = 0``, which is the default setting.
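An excerpt-style illustration of the mismatch (signatures copied from the report above; not runnable on its own):

```python
# Call site (line 785 of modeling_transfo_xl.py), as reported:
new_mems = self._update_mems(hids, mems, mlen, qlen)

# Declaration:
#   def _update_mems(self, hids, mems, qlen, mlen): ...

# One possible fix: pass the arguments in declaration order:
new_mems = self._update_mems(hids, mems, qlen, mlen)
```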
## Information
Related to https://github.com/kimiyoung/transformer-xl/issues/96
Model I am using (Bert, XLNet ...): Transformer-XL (modeling_transfo_xl.py)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
There is no issue during training, but it might affect performance in evaluation.
## Environment info
- `transformers` version: 2.5.0
- Platform: MACOS
- Python version: 3.6.0
- PyTorch version (GPU?): 1.2.0
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2970/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2969/comments | https://api.github.com/repos/huggingface/transformers/issues/2969/events | https://github.com/huggingface/transformers/pull/2969 | 569,393,660 | MDExOlB1bGxSZXF1ZXN0Mzc4NjMxODMw | 2,969 | Bart: fix layerdrop and caching shapes for generation | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=h1) Report\n> Merging [#2969](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2969 +/- ##\n==========================================\n- Coverage 77.14% 77.14% -0.01% \n==========================================\n Files 98 98 \n Lines 16006 16003 -3 \n==========================================\n- Hits 12348 12345 -3 \n Misses 3658 3658\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/2969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `84.58% <100%> (-0.11%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=footer). Last update [c36416e...0198f22](https://codecov.io/gh/huggingface/transformers/pull/2969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2969",
"html_url": "https://github.com/huggingface/transformers/pull/2969",
"diff_url": "https://github.com/huggingface/transformers/pull/2969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2969.patch",
"merged_at": 1582406704000
} |
https://api.github.com/repos/huggingface/transformers/issues/2968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2968/comments | https://api.github.com/repos/huggingface/transformers/issues/2968/events | https://github.com/huggingface/transformers/pull/2968 | 569,388,933 | MDExOlB1bGxSZXF1ZXN0Mzc4NjI4NDk2 | 2,968 | Delete untested, broken Model2LSTM | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=h1) Report\n> Merging [#2968](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c36416e53c29da8b6193f4a36d7b024c5f513495?src=pr&el=desc) will **decrease** coverage by `1.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2968 +/- ##\n==========================================\n- Coverage 77.14% 76.13% -1.02% \n==========================================\n Files 98 98 \n Lines 16006 15998 -8 \n==========================================\n- Hits 12348 12180 -168 \n- Misses 3658 3818 +160\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.37% <ø> (-1.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=footer). Last update [c36416e...05560e2](https://codecov.io/gh/huggingface/transformers/pull/2968?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | If you need it back `git checkout c36416e5` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2968",
"html_url": "https://github.com/huggingface/transformers/pull/2968",
"diff_url": "https://github.com/huggingface/transformers/pull/2968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2968.patch",
"merged_at": 1582475329000
} |
https://api.github.com/repos/huggingface/transformers/issues/2967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2967/comments | https://api.github.com/repos/huggingface/transformers/issues/2967/events | https://github.com/huggingface/transformers/pull/2967 | 569,385,987 | MDExOlB1bGxSZXF1ZXN0Mzc4NjI2NDMx | 2,967 | missing name entity recognition link | {
"login": "autoih",
"id": 41447049,
"node_id": "MDQ6VXNlcjQxNDQ3MDQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/41447049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autoih",
"html_url": "https://github.com/autoih",
"followers_url": "https://api.github.com/users/autoih/followers",
"following_url": "https://api.github.com/users/autoih/following{/other_user}",
"gists_url": "https://api.github.com/users/autoih/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autoih/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autoih/subscriptions",
"organizations_url": "https://api.github.com/users/autoih/orgs",
"repos_url": "https://api.github.com/users/autoih/repos",
"events_url": "https://api.github.com/users/autoih/events{/privacy}",
"received_events_url": "https://api.github.com/users/autoih/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=h1) Report\n> Merging [#2967](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2967 +/- ##\n==========================================\n+ Coverage 77.11% 77.12% +<.01% \n==========================================\n Files 98 98 \n Lines 15977 15977 \n==========================================\n+ Hits 12321 12322 +1 \n+ Misses 3656 3655 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2967/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.53% <0%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=footer). Last update [94ff2d6...bede5f4](https://codecov.io/gh/huggingface/transformers/pull/2967?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2967",
"html_url": "https://github.com/huggingface/transformers/pull/2967",
"diff_url": "https://github.com/huggingface/transformers/pull/2967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2967.patch",
"merged_at": 1582657618000
} |
https://api.github.com/repos/huggingface/transformers/issues/2966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2966/comments | https://api.github.com/repos/huggingface/transformers/issues/2966/events | https://github.com/huggingface/transformers/pull/2966 | 569,346,756 | MDExOlB1bGxSZXF1ZXN0Mzc4NTk4NjYx | 2,966 | Warning on `add_special_tokens` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=h1) Report\n> Merging [#2966](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc6775cdf5b20ad382613d3bdbf0dd8364d23219?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2966 +/- ##\n==========================================\n- Coverage 77.12% 77.11% -0.01% \n==========================================\n Files 98 98 \n Lines 15977 15979 +2 \n==========================================\n Hits 12322 12322 \n- Misses 3655 3657 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2966/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.48% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2966/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.33%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=footer). Last update [cc6775c...7a09a54](https://codecov.io/gh/huggingface/transformers/pull/2966?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,651 | 1,582 | MEMBER | null | Warning on `add_special_tokens` when passed to `encode`, `encode_plus` and `batch_encode_plus` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2966",
"html_url": "https://github.com/huggingface/transformers/pull/2966",
"diff_url": "https://github.com/huggingface/transformers/pull/2966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2966.patch",
"merged_at": 1582551775000
} |
https://api.github.com/repos/huggingface/transformers/issues/2965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2965/comments | https://api.github.com/repos/huggingface/transformers/issues/2965/events | https://github.com/huggingface/transformers/pull/2965 | 569,346,648 | MDExOlB1bGxSZXF1ZXN0Mzc4NTk4NTkx | 2,965 | Correct `special_tokens_mask` when `add_special_tokens=False` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=h1) Report\n> Merging [#2965](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc6775cdf5b20ad382613d3bdbf0dd8364d23219?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2965 +/- ##\n==========================================\n- Coverage 77.12% 77.11% -0.01% \n==========================================\n Files 98 98 \n Lines 15977 15979 +2 \n==========================================\n+ Hits 12322 12323 +1 \n- Misses 3655 3656 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.33% <66.66%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `97.82% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.54% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.05% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.84% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `95.11% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.66% <0%> (ø)` | :arrow_up: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/2965/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=footer). Last update [cc6775c...46b6238](https://codecov.io/gh/huggingface/transformers/pull/2965?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"The code wouldn't be slower if we always returned it, but one of the use-cases of `encode_plus` is that it provides the full array of necessary values for the model, and only those by default. The rest (useful values but that cannot be fed to the model) is optional. This is so we may do something like this:\r\n\r\n```py\r\nvalue = tokenizer.encode_plus(\"First sequence\", \"second sequence\")\r\nmodel(**value)\r\n```\r\n\r\nThis isn't perfect yet as it returns some values which are not usable by some models (e.g. `token_type_ids` for DistilBERT which crash the model).\r\n\r\nSee #2702 and #2871 for more background."
] | 1,582 | 1,582 | 1,582 | MEMBER | null | I don't know of a use case where this would be needed, but it is more consistent. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2965/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2965",
"html_url": "https://github.com/huggingface/transformers/pull/2965",
"diff_url": "https://github.com/huggingface/transformers/pull/2965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2965.patch",
"merged_at": 1582469439000
} |
https://api.github.com/repos/huggingface/transformers/issues/2964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2964/comments | https://api.github.com/repos/huggingface/transformers/issues/2964/events | https://github.com/huggingface/transformers/pull/2964 | 569,329,287 | MDExOlB1bGxSZXF1ZXN0Mzc4NTg2MDg4 | 2,964 | [DOCS] fix hardcoded path in examples readme | {
"login": "saippuakauppias",
"id": 945306,
"node_id": "MDQ6VXNlcjk0NTMwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/945306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saippuakauppias",
"html_url": "https://github.com/saippuakauppias",
"followers_url": "https://api.github.com/users/saippuakauppias/followers",
"following_url": "https://api.github.com/users/saippuakauppias/following{/other_user}",
"gists_url": "https://api.github.com/users/saippuakauppias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saippuakauppias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saippuakauppias/subscriptions",
"organizations_url": "https://api.github.com/users/saippuakauppias/orgs",
"repos_url": "https://api.github.com/users/saippuakauppias/repos",
"events_url": "https://api.github.com/users/saippuakauppias/events{/privacy}",
"received_events_url": "https://api.github.com/users/saippuakauppias/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=h1) Report\n> Merging [#2964](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2964 +/- ##\n==========================================\n- Coverage 77.11% 76.08% -1.04% \n==========================================\n Files 98 98 \n Lines 15977 15977 \n==========================================\n- Hits 12321 12156 -165 \n- Misses 3656 3821 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=footer). Last update [94ff2d6...4e2066b](https://codecov.io/gh/huggingface/transformers/pull/2964?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! cc @LysandreJik "
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | I think that after this PR is merged, the docs need to be rebuilt, right? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2964/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2964",
"html_url": "https://github.com/huggingface/transformers/pull/2964",
"diff_url": "https://github.com/huggingface/transformers/pull/2964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2964.patch",
"merged_at": 1582387959000
} |
https://api.github.com/repos/huggingface/transformers/issues/2963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2963/comments | https://api.github.com/repos/huggingface/transformers/issues/2963/events | https://github.com/huggingface/transformers/issues/2963 | 569,258,295 | MDU6SXNzdWU1NjkyNTgyOTU= | 2,963 | Length of special_tokens_mask doesn't align with the input_ids | {
"login": "hinthornw",
"id": 13333726,
"node_id": "MDQ6VXNlcjEzMzMzNzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13333726?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hinthornw",
"html_url": "https://github.com/hinthornw",
"followers_url": "https://api.github.com/users/hinthornw/followers",
"following_url": "https://api.github.com/users/hinthornw/following{/other_user}",
"gists_url": "https://api.github.com/users/hinthornw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hinthornw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hinthornw/subscriptions",
"organizations_url": "https://api.github.com/users/hinthornw/orgs",
"repos_url": "https://api.github.com/users/hinthornw/repos",
"events_url": "https://api.github.com/users/hinthornw/events{/privacy}",
"received_events_url": "https://api.github.com/users/hinthornw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
data = tokenizer.encode_plus("Hello, world!", add_special_tokens=False, return_special_tokens_mask=True)
assert len(data['input_ids']) == len(data['special_tokens_mask'])
# raises AssertionError: the two lists have different lengths
```
## Expected behavior
Expect there to be no assertion error. The mask should be of the same shape as the input_ids.
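A possible interim workaround (a minimal sketch, not the library fix): build the mask from the ids with `get_special_tokens_mask`, which slow tokenizers expose.
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
ids = tokenizer.encode("Hello, world!", add_special_tokens=False)
# Derive the mask from the ids themselves instead of relying on encode_plus
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
assert len(ids) == len(mask)  # one mask entry per input id
```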
## Environment info
- `transformers` version: 2.4.1
- Platform: Linux
- Python version: 3.5.2
- PyTorch version (GPU?): 1.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2962/comments | https://api.github.com/repos/huggingface/transformers/issues/2962/events | https://github.com/huggingface/transformers/pull/2962 | 569,248,316 | MDExOlB1bGxSZXF1ZXN0Mzc4NTI1NTUw | 2,962 | [WIP] Proposal for Migrating to Typer for CLI and Examples | {
"login": "kabirkhan",
"id": 13891834,
"node_id": "MDQ6VXNlcjEzODkxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13891834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kabirkhan",
"html_url": "https://github.com/kabirkhan",
"followers_url": "https://api.github.com/users/kabirkhan/followers",
"following_url": "https://api.github.com/users/kabirkhan/following{/other_user}",
"gists_url": "https://api.github.com/users/kabirkhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kabirkhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kabirkhan/subscriptions",
"organizations_url": "https://api.github.com/users/kabirkhan/orgs",
"repos_url": "https://api.github.com/users/kabirkhan/repos",
"events_url": "https://api.github.com/users/kabirkhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/kabirkhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing as discussed in https://github.com/huggingface/transformers/issues/2959#issuecomment-619214142"
] | 1,582 | 1,587 | 1,587 | NONE | null | This intentionally doesn't update dependencies; it's just meant to demonstrate migrating an example from argparse to Typer.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2962",
"html_url": "https://github.com/huggingface/transformers/pull/2962",
"diff_url": "https://github.com/huggingface/transformers/pull/2962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2962.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2961/comments | https://api.github.com/repos/huggingface/transformers/issues/2961/events | https://github.com/huggingface/transformers/pull/2961 | 569,242,251 | MDExOlB1bGxSZXF1ZXN0Mzc4NTIwNDcz | 2,961 | Fix max_length not taken into account when using pad_to_max_length on fast tokenizers | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should fix #2950",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=h1) Report\n> Merging [#2961](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/94ff2d6ee8280c5595b92c1128c0f18e44925e56?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2961 +/- ##\n==========================================\n- Coverage 77.11% 77.11% -0.01% \n==========================================\n Files 98 98 \n Lines 15977 15977 \n==========================================\n- Hits 12321 12320 -1 \n- Misses 3656 3657 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <ø> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.23% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.2% <0%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=footer). Last update [94ff2d6...bf62e7c](https://codecov.io/gh/huggingface/transformers/pull/2961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | On fast tokenizers, calling encode / encode_plus / batch_encode_plus did not take max_length into account when setting the padding strategy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2961/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2961",
"html_url": "https://github.com/huggingface/transformers/pull/2961",
"diff_url": "https://github.com/huggingface/transformers/pull/2961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2961.patch",
"merged_at": 1582381668000
} |
https://api.github.com/repos/huggingface/transformers/issues/2960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2960/comments | https://api.github.com/repos/huggingface/transformers/issues/2960/events | https://github.com/huggingface/transformers/issues/2960 | 569,241,478 | MDU6SXNzdWU1NjkyNDE0Nzg= | 2,960 | Python Tokenizer batch_encode_plus doesn't pad input if asked to do so. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,582 | 1,582 | 1,582 | MEMBER | null | ```python
input_p = tokenizer_p.batch_encode_plus(
["This is a simple input 1", "This is a simple input 2"],
max_length=15,
pad_to_max_length=True
)
```
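For reference, this is the check I would expect to pass (a minimal sketch reusing the same `tokenizer_p`):
```python
out = tokenizer_p.batch_encode_plus(
    ["This is a simple input 1", "This is a simple input 2"],
    max_length=15,
    pad_to_max_length=True,
)
# With pad_to_max_length=True, every sequence should come back padded to max_length
for ids in out["input_ids"]:
    assert len(ids) == 15
```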
Instead, the output is not padded. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2959/comments | https://api.github.com/repos/huggingface/transformers/issues/2959/events | https://github.com/huggingface/transformers/issues/2959 | 569,224,311 | MDU6SXNzdWU1NjkyMjQzMTE= | 2,959 | Switching from argparse to Typer | {
"login": "kabirkhan",
"id": 13891834,
"node_id": "MDQ6VXNlcjEzODkxODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13891834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kabirkhan",
"html_url": "https://github.com/kabirkhan",
"followers_url": "https://api.github.com/users/kabirkhan/followers",
"following_url": "https://api.github.com/users/kabirkhan/following{/other_user}",
"gists_url": "https://api.github.com/users/kabirkhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kabirkhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kabirkhan/subscriptions",
"organizations_url": "https://api.github.com/users/kabirkhan/orgs",
"repos_url": "https://api.github.com/users/kabirkhan/repos",
"events_url": "https://api.github.com/users/kabirkhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/kabirkhan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | null | [] | [
"Looks like [sacremoses](https://github.com/alvations/sacremoses) already uses [Click](https://click.palletsprojects.com/en/7.x/) as a dependency so using Typer would be ideal as it's only dependency is Click and it provides a much nicer interface than Click using Python 3.6+ Type Hints",
"Also, since Typer is using Python 3.6+ Type Hints this would require dropping Python 3.5. While that's not ideal it is the general trend for a lot projects these days as the benefits of using type hints and proper dict ordering are really high when it comes to testing and general usability.\r\n\r\nMore discussion here about Python 3.5 and whether it's worth supporting since only about a little over 1% of installs come from Python 3.5: https://github.com/huggingface/transformers/issues/2608",
"So I don't think the issue is argument parsing, that code is pretty much declarative and will look the same in all implementations. \r\n\r\nWe have been experimenting with using lightning to simplify the interface for these models. Do you find the code here to be more readable?\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/ner/run_pl_ner.py\r\n\r\n",
"### Code Readability\r\n\r\nI actually think we're talking about 2 different problems:\r\n\r\n1. Readability of the examples overall is not excellent. (This is addressed nicely with Pytorch Lightning)\r\n2. The management of command line args with argparse creates confusion as you're passing around untyped arguments to arbitrary functions. And declaring arguments is very verbose. Typer basically makes typed python functions into CLI's mostly automatically and cleans up the code a lot.\r\n\r\nI've implemented a couple draft PRs for using Typer.\r\n\r\n1. Example of migrating one example to Typer https://github.com/huggingface/transformers/pull/2962\r\n2. Full rewrite of the transformers-cli using Typer: https://github.com/huggingface/transformers/pull/2974\r\n\r\nTyper reduces the amount of code significantly in both cases while also making the functions easier to read and understand. \r\n\r\n\r\n### CLI Usability \r\n\r\nThere's a larger discussion about using CLI Arguments vs Options that should be had as well. \r\nMost of the examples are overly verbose to run using the existing CLI options.\r\n\r\nFor instance, in the run_generation.py example I migrated (PR 1 above) there are only 2 required options (model_type and model_name_or_path). I made these Typer Options to not break convention for now but they should both be arguments.\r\n\r\n\r\nThat way, instead of writing:\r\n\r\n```bash\r\npython examples/run_generation.py --model_type gpt2 --model_name_or_path distilgpt2\r\n```\r\n\r\nthe user can write something like:\r\n\r\n```bash\r\npython examples/run_generation.py gpt2 distilgpt2\r\n```\r\n\r\nAnd the docstring for the function can document that these arguments are required. Typer automatically uses the docstring in the help.\r\n\r\nSo here's the automatic help docs for run_generation\r\n\r\n```bash\r\npython examples/run_generation.py --help\r\n```\r\n\r\n```console\r\nUsage: run_generation.py [OPTIONS] MODEL_TYPE MODEL_NAME_OR_PATH\r\n\r\n Generate text based on a prompt using one of [gpt2, ctrl, openai-gpt,\r\n xlnet, transfo-xl, xlm] as the model_type and a a supported model name or\r\n path for that model_type\r\n\r\n e.g.\r\n\r\n $ python examples/run_generation.py gpt2 distilgpt2\r\n\r\nOptions:\r\n --prompt TEXT\r\n --length INTEGER\r\n --stop-token TEXT Token at which text generation is stopped\r\n --temperature FLOAT temperature of 1.0 has no effect, lower tend\r\n toward greedy sampling\r\n --repetition-penalty FLOAT primarily useful for CTRL model; in that\r\n case, use 1.2\r\n --k INTEGER\r\n --p FLOAT\r\n --padding-text TEXT Padding text for Transfo-XL and XLNet.\r\n --xlm-language TEXT Optional language when used with the XLM\r\n model\r\n --seed INTEGER random seed for initialization\r\n --no-cuda Don't use CUDA and run on CPU.\r\n --num-return-sequences INTEGER The number of samples to generate.\r\n --help Show this message and exit.\r\n```",
"The work on transformers-cli (2) seems interesting as there are complex types there. I am pretty unconvinced on (1). The code reduction is mostly aesthetic, I don't see any really complexity wins. Given that I'm apt to stick with argparse as it is standard. (The argument/options thing could also be done in argparse. )",
"Thanks for the feedback\r\n\r\nActually I think it's more standard to use a CLI parsing dependency over argparse these days. Not a huge deal and it's not my library but I've just heard the same feedback about argparse in the examples from a few colleagues around Microsoft which is why I decided to propose the change.\r\n\r\nIf you do have some time to give a quick review on (2) that would be awesome. I think the changes there offer a lot of clarity particularly with using the Enum types.",
"@julien-c any thoughts on this? I don't think we want another dependency, but @kabirkhan did put a lot of work into restructuring CLI https://github.com/huggingface/transformers/pull/2974",
"My two cents, or maybe just one cent: I have always been torn with this, the same with plac. It feels more verbose than argparse but also, it doesn't. \r\n\r\nHere, in this case, we currently already have the `register_subcommand`s so I think Typer actually makes sense. Looking at the code, it does greatly reduce redundancy (except for the app.command() calls). However, it is a lot less known (I think) and I still feel it is less intuitive than good ol' argparse. So if this were a vote, I'd vote aye, keeping in mind that it might be a \"learning curve\". On top of that, all future CLI scripts should also use this library to streamline the experience and development. As a bonus, dropping 3.5 support is an excellent side-effect in my opinion (Typing, f-strings, ordered dicts), but one might want to check how many users are on 3.5.",
"@BramVanroy thanks for the input. Yeah the app.command() calls are annoying, fixed that with a for loop just now. I hear your point on the \"learning curve\" but Typer is truly very easy to learn and really well documented. It's built by tiangolo (same guy who made FastAPI). Also the current CLI already requires python 3.6",
"@BramVanroy less than 1% of pip installs are on Python 3.5 according to https://pypistats.org/packages/transformers – we will probably drop it in the next couple of weeks or months\r\n\r\n@kabirkhan Thanks for the work you've put into those PRs! This is interesting, and analogous to ideas we've been discussing internally and here in other issues. More generally we are in the process of thinking about rebuilding our training example scripts in a more scalable/re-usable way. We'll link our first thoughts from here.",
"I'll close this as we now have a very simple (100 lines of code) built-in argument parser named [HfArgumentParser](https://github.com/huggingface/transformers/blob/master/src/transformers/hf_argparser.py).\r\n\r\nExample scripts are now way cleaner after #3800.\r\n\r\nWill also close associated PR #2974. \r\n\r\nWould love to get your feedback @kabirkhan, thanks for prompting our reflection."
] | 1,582 | 1,587 | 1,587 | NONE | null | # 🚀 Feature request
I'm pretty new to transformers and I'm finding the examples a bit hard to read. Mainly I think this is due to argparse being so verbose. Is there any interest in migrating to something like [Plac](https://micheles.github.io/plac/) or even better [Typer](https://typer.tiangolo.com/)?
I think it would make all of the examples a lot easier to grasp right away and friendlier to newer users.
## Motivation
It will make the examples easier to read and a lot shorter. Currently, in an example like https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py, just parsing CLI arguments takes 140 lines of the 799 lines of code (17.5% of the total code).
Then the args object is passed around to all of the other functions and that becomes hard to deal with when first looking at the example and not having an understanding of exactly what values each function needs.
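For illustration, a command in Typer looks roughly like this (a hypothetical sketch, not code from the examples):
```python
import typer

app = typer.Typer()

@app.command()
def generate(model_type: str, model_name_or_path: str, length: int = 20):
    """Generate text with the given model."""
    typer.echo(f"Generating {length} tokens with {model_type}:{model_name_or_path}")

if __name__ == "__main__":
    app()
```
The function signature doubles as the argument declaration, so there is no separate parser block to maintain.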
## Your contribution
I'm happy to contribute the change across the transformers-cli and all the examples over time. Want to better understand the project requirements around adding a new dependency before submitting any PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2959/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2958/comments | https://api.github.com/repos/huggingface/transformers/issues/2958/events | https://github.com/huggingface/transformers/pull/2958 | 569,209,795 | MDExOlB1bGxSZXF1ZXN0Mzc4NDkzODc2 | 2,958 | Remove double bias | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,582 | 1,582 | MEMBER | null | Bias is currently applied twice in BERT, RoBERTa and ALBERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2958/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2958",
"html_url": "https://github.com/huggingface/transformers/pull/2958",
"diff_url": "https://github.com/huggingface/transformers/pull/2958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2958.patch",
"merged_at": 1582323019000
} |
https://api.github.com/repos/huggingface/transformers/issues/2957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2957/comments | https://api.github.com/repos/huggingface/transformers/issues/2957/events | https://github.com/huggingface/transformers/issues/2957 | 569,200,027 | MDU6SXNzdWU1NjkyMDAwMjc= | 2,957 | Bias in `BertLMPredictionHead` is added twice | {
"login": "jzbjyb",
"id": 5134761,
"node_id": "MDQ6VXNlcjUxMzQ3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5134761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzbjyb",
"html_url": "https://github.com/jzbjyb",
"followers_url": "https://api.github.com/users/jzbjyb/followers",
"following_url": "https://api.github.com/users/jzbjyb/following{/other_user}",
"gists_url": "https://api.github.com/users/jzbjyb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzbjyb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzbjyb/subscriptions",
"organizations_url": "https://api.github.com/users/jzbjyb/orgs",
"repos_url": "https://api.github.com/users/jzbjyb/repos",
"events_url": "https://api.github.com/users/jzbjyb/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzbjyb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a known issue (cf. https://github.com/huggingface/transformers/pull/2928). It will be fixed in another (cleaner) PR, though."
] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
According to this PR #2521, a link is created between the linear layer bias and the model attribute bias in `BertLMPredictionHead`. In the `__init__`:
```python
self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
self.bias = nn.Parameter(torch.zeros(config.vocab_size))
self.decoder.bias = self.bias # here is the problem
```
and in the `forward`:
```python
hidden_states = self.decoder(hidden_states) + self.bias
```
I am afraid this causes `self.decoder` (which is an `nn.Linear`) to have a bias, and as a result the bias is added twice in the `forward` function.
## To reproduce
Steps to reproduce the behavior:
For version 2.4.0, where the PR is merged and thus has this bug:
```python
from transformers import *
model = BertForMaskedLM.from_pretrained('bert-base-cased')
print(model.cls.predictions.bias)
# tensor([-0.1788, -0.1758, -0.1752, ..., -0.3448, -0.3574, -0.3483], requires_grad=True)
print(model(torch.tensor([[0]])))
# (tensor([[[-12.2630, -12.2798, -12.1221, ..., -10.2729, -10.8859, -11.1733]]], grad_fn=<AddBackward0>),)
```
For version 2.3.0, which is before the PR being merged and thus no bug:
```python
from transformers import *
model = BertForMaskedLM.from_pretrained('bert-base-cased')
print(model.cls.predictions.bias)
# tensor([-0.1788, -0.1758, -0.1752, ..., -0.3448, -0.3574, -0.3483], requires_grad=True)
print(model(torch.tensor([[0]])))
# (tensor([[[-12.0842, -12.1040, -11.9469, ..., -9.9281, -10.5285, -10.8251]]], grad_fn=<AddBackward0>),)
```
Comparing the above output, you can clearly see that for version 2.4.0, the bias is added twice.
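For what it's worth, a minimal sketch of one possible fix (just an illustration of the idea; the actual patch may differ): since `self.decoder.bias` is tied to `self.bias`, the linear layer already applies it, so the extra addition in `forward` can be dropped:
```python
def forward(self, hidden_states):
    hidden_states = self.transform(hidden_states)
    # self.decoder.bias is self.bias, so the bias is applied exactly once here
    hidden_states = self.decoder(hidden_states)
    return hidden_states
```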
## Environment info
- `transformers` version: 2.4.0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.0.1 (with GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2956/comments | https://api.github.com/repos/huggingface/transformers/issues/2956/events | https://github.com/huggingface/transformers/pull/2956 | 569,188,312 | MDExOlB1bGxSZXF1ZXN0Mzc4NDc2MDA5 | 2,956 | adding support for commonsense qa for multiple choice question answering | {
"login": "nrjvarshney",
"id": 19836137,
"node_id": "MDQ6VXNlcjE5ODM2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrjvarshney",
"html_url": "https://github.com/nrjvarshney",
"followers_url": "https://api.github.com/users/nrjvarshney/followers",
"following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}",
"gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions",
"organizations_url": "https://api.github.com/users/nrjvarshney/orgs",
"repos_url": "https://api.github.com/users/nrjvarshney/repos",
"events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrjvarshney/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can somebody help me in fixing the failed test?",
"Hi, thanks for your addition, that's really cool! Did you check the [contributions guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)?\r\n\r\nYou should make sure you have the correct versions of `black`, `isort` and `flake8` installed and then run `make style`/`make quality` to identify the code quality issues.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2956",
"html_url": "https://github.com/huggingface/transformers/pull/2956",
"diff_url": "https://github.com/huggingface/transformers/pull/2956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2956.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2955/comments | https://api.github.com/repos/huggingface/transformers/issues/2955/events | https://github.com/huggingface/transformers/pull/2955 | 569,181,977 | MDExOlB1bGxSZXF1ZXN0Mzc4NDcwNjg4 | 2,955 | Only use F.gelu for torch >=1.4.0 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=h1) Report\n> Merging [#2955](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e98f27e4a9bb0ac3d0fe24b94d30da42cdae8a7?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2955 +/- ##\n=========================================\n- Coverage 76.1% 76.1% -0.01% \n=========================================\n Files 98 98 \n Lines 15946 15948 +2 \n=========================================\n+ Hits 12136 12137 +1 \n- Misses 3810 3811 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `87.5% <66.66%> (-5.36%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=footer). Last update [3e98f27...49c66bd](https://codecov.io/gh/huggingface/transformers/pull/2955?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,583 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2955",
"html_url": "https://github.com/huggingface/transformers/pull/2955",
"diff_url": "https://github.com/huggingface/transformers/pull/2955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2955.patch",
"merged_at": 1582319421000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2954/comments | https://api.github.com/repos/huggingface/transformers/issues/2954/events | https://github.com/huggingface/transformers/issues/2954 | 569,171,746 | MDU6SXNzdWU1NjkxNzE3NDY= | 2,954 | Distillation throws CUDA out of memory even with available GPU memory | {
"login": "sultanovazamat",
"id": 26954978,
"node_id": "MDQ6VXNlcjI2OTU0OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/26954978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sultanovazamat",
"html_url": "https://github.com/sultanovazamat",
"followers_url": "https://api.github.com/users/sultanovazamat/followers",
"following_url": "https://api.github.com/users/sultanovazamat/following{/other_user}",
"gists_url": "https://api.github.com/users/sultanovazamat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sultanovazamat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sultanovazamat/subscriptions",
"organizations_url": "https://api.github.com/users/sultanovazamat/orgs",
"repos_url": "https://api.github.com/users/sultanovazamat/repos",
"events_url": "https://api.github.com/users/sultanovazamat/events{/privacy}",
"received_events_url": "https://api.github.com/users/sultanovazamat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"An error trace would be useful.",
"> An error trace would be useful.\r\n\r\nThis is an error trace:\r\n\r\nF.softmax(t_logits_slct / self.temperature, dim=-1),\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py\", line 366, in forward\r\n return F.kl_div(input, target, reduction=self.reduction)\r\n File \"/opt/conda/lib/python3.7/site-packages/apex/amp/wrap.py\", line 28, in wrapper\r\n return orig_fn(*new_args, **kwargs)\r\n File \"/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py\", line 1987, in kl_div\r\n reduced = torch.kl_div(input, target, reduction_enum)\r\nRuntimeError: CUDA out of memory. Tried to allocate 818.00 MiB (GPU 0; 10.76 GiB total capacity; 8.61 GiB already allocated; 787.44 MiB free; 9.19 GiB reserved in total by PyTorch)",
"> 787.44 MiB free\r\n\r\nSo your GPU doesn't have enough memory available at that point. (Even if nvidia-smi says it is only using 70%.)\r\n\r\nThere are known issues with apex that it doesn't work well when you reload checkpoints and continue training in the same Python session. Does the same issue occur when you use torch DDP (not apex), no FP16, no amp?",
"> So your GPU doesn't have enough memory available at that point. (Even if nvidia-smi says it is only using 70%.)\r\n\r\nYour point is right, but the strange thing is that this error can occur accidentally even if 99% of training time GPU consumption is less than 70%. (This happens even with tiny batch size).\r\n\r\nThe same error occurs with DDP, no FP16, no amp, moreover, I've also tried to run the distillation on a single GPU without distribution and the result is the same. ",
"> There are known issues with apex that it doesn't work well when you reload checkpoints and continue training in the same Python session.\r\n\r\nBTW, I didn't reload checkpoint in the same python session. The distillation script was relaunched with loading the last checkpoint as soon as a new checkpoint was made, so the session is new.",
"Hello @AzamatSultonov,\r\nAs far as I know, the memory leak mentioned in #1179 was fixed and was released a couple of updates ago in PyTorch. I didn't encounter similar problems recently.\r\n\r\nCan I ask what is your batch size? Have you tried a batch size of 1 (and slowly increase it)? 11GB is not a lot to fit two models (and train one of them).",
"Hello @VictorSanh, the minimum batch size that I've tried was 3 (1 takes too much time), but OOM threw again (with available GPU memory).\r\n\r\nThe #1179 fix helped to prolong the training time for bigger batch size, but didn't solve the problem completely.\r\n\r\nBTW, I turned off the distributed way of training and launched the distillation on a single GPU with batch size 5 (periodically emptying CUDA cache) and the training goes for almost 48 hours without crashes.\r\nThis is still slow, but at least without OOM and losses are going down. \r\nI'll let you know as soon as the training will finish.",
"Are your batches of constant total size? i.e. do you need always need the exact same amount of gpu memory to do your intermediate computations?\r\nThe reason why I suggested to start with a batch size of 1 is to detect this. You can always use gradient accumulation to simulate a bigger batch size.\r\nSomething that can help is also tracking the memory in a tensorboard.",
"> Are your batches of constant total size? i.e. do you need always need the exact same amount of gpu memory to do your intermediate computations?\r\n\r\nYes, they are. \r\nAlso, in the last version of the script, I've changed the padding to the max length within the whole dataset instead of max length withing the current batch, to avoid tensor's memory reallocation by torch and reusing already allocated one.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am facing this issue again, gpu usage is around 60% checked using `nvidia-smi`. In this tuning `batch_size` dosen't make sense, but still I had changed it but problem didn't solved.\r\n\r\nTrying to fine tune XLM roberta for urdu classification\r\ntransformers: 4.9.1\r\ntorch: 1.9.0+cu102"
] | 1,582 | 1,628 | 1,590 | NONE | null | # ❓ Questions & Help
## Details
Hi! I am trying to run the distillation of XLM-RoBERTa to ALBERT (also to a small XLM-RoBERTa) on 4 GPUs (RTX 2080 Ti), after slightly modifying the distillation script so that training goes through small chunks of the dataset (preprocessing the whole dataset at once is difficult). The problem is that training throws CUDA OOM, even though GPU memory consumption is at most 70%.
I've found the closely related issue [#1179 ](https://github.com/huggingface/transformers/issues/1179) and tried installing torch from source to avoid some bugs, as suggested there, but the OOM comes back just a little later.
I've also tried several things, but all were unsuccessful:
1) Reducing the batch size and max length doesn't help; it just prolongs the training process, and at some point distillation crashes again;
2) Run distillation in such manner: train on one chunk -> make checkpoint -> rerun distillation from pretrained;
3) Run with torch/apex distributed learning;
4) Run with --fp16 / --fp32;
5) Run with/without amp optimization;
Is it possible that the problem is related to the dataset? (Running training on different chunks throws OOM in different moments. BTW some chunks are processed fully without any errors).
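One more thing I plan to try is gradient accumulation plus periodic cache flushes (a rough sketch; `compute_distillation_loss` is a placeholder for my actual loss, and `student`, `teacher`, `optimizer`, `train_dataloader` are assumed to be set up already):
```python
import torch

accumulation_steps = 4  # hypothetical value; simulates a 4x larger batch
optimizer.zero_grad()
for step, batch in enumerate(train_dataloader):
    loss = compute_distillation_loss(student, teacher, batch)  # placeholder
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
        torch.cuda.empty_cache()  # release cached blocks between optimizer steps
```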
I appreciate any help; I have no more guesses on how to solve this problem. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2954/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2954/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2953/comments | https://api.github.com/repos/huggingface/transformers/issues/2953/events | https://github.com/huggingface/transformers/issues/2953 | 569,170,680 | MDU6SXNzdWU1NjkxNzA2ODA= | 2,953 | Migrating from `pytorch-pretrained-bert` to `pytorch-transformers` issue regarding model() output | {
"login": "ArashAskary",
"id": 37027721,
"node_id": "MDQ6VXNlcjM3MDI3NzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/37027721?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArashAskary",
"html_url": "https://github.com/ArashAskary",
"followers_url": "https://api.github.com/users/ArashAskary/followers",
"following_url": "https://api.github.com/users/ArashAskary/following{/other_user}",
"gists_url": "https://api.github.com/users/ArashAskary/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArashAskary/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArashAskary/subscriptions",
"organizations_url": "https://api.github.com/users/ArashAskary/orgs",
"repos_url": "https://api.github.com/users/ArashAskary/repos",
"events_url": "https://api.github.com/users/ArashAskary/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArashAskary/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to tell the model that you wish to get all the hidden states\r\n\r\n```python\r\nmodel = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n```\r\n\r\nThen, you'll find your expected output as the third item in the output tuple:\r\n```python\r\n encoded_layers = model(tokens_tensor, segments_tensors)[2]\r\n```\r\n\r\nIIRC those layers now also include the embeddings (so 13 items in total), so you might need to update the index to get the second last layer. Might be better to use a negative index to be sure (-2)."
] | 1,582 | 1,582 | 1,582 | NONE | null | I'm having trouble migrating my code from `pytorch_pretrained_bert` to `pytorch_transformers`. I'm attempting to run a cosine similarity exercise. I want to extract the text embedding values from the second-to-last of the 12 hidden layers.
```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel
#from pytorch_transformers import BertTokenizer, BertModel
import pandas as pd
import numpy as np
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# This is done by default in the pytorch_transformers
model.eval()
input_query = "This is my test input query text"
marked_text = "[CLS] " + input_query + " [SEP]"
tokenized_text = tokenizer.tokenize(marked_text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [1] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
with torch.no_grad():
encoded_layers, _ = model(tokens_tensor, segments_tensors)
sentence_embedding = torch.mean(encoded_layers[10], 1)
```
Using `pytorch_pretrained_bert`, the above code works perfectly fine. My `encoded_layers` object is a list of 12 hidden-layer tensors, allowing me to pick the 11th layer and reduce it by taking an average, resulting in a `sentence_embedding` object I can run cosine similarities against.
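(The similarity step itself is just the usual cosine between two such pooled vectors; a minimal sketch, with random tensors standing in for two sentence embeddings built as above:)
```python
import torch
import torch.nn.functional as F

# emb_a, emb_b: stand-ins for two [1, 768] sentence embeddings
emb_a, emb_b = torch.randn(1, 768), torch.randn(1, 768)
similarity = F.cosine_similarity(emb_a, emb_b, dim=1)
```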
However, when I migrate my code to the `pytorch_transformers` library, the resulting `encoded_layers` object is no longer the full list of 12 hidden layers, but a single torch tensor object of shape `torch.Size([1, 7, 768])`, which results in the following error when I attempt to create the `sentence_embedding` object:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-23-7f877a7d2f9c> in <module>
9 encoded_layers, _ = model(tokens_tensor, segments_tensors)
---> 10 sentence_embedding = torch.mean(test[10], 1)
11
IndexError: index 10 is out of bounds for dimension 0 with size 7
```
The migration documentation (https://huggingface.co/transformers/migration.html) states that I should take the first element of the `encoded_layers` object as a replacement, but that does not give me access to the second-to-last hidden layer of embeddings.
How can I access it?
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2953/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2952/comments | https://api.github.com/repos/huggingface/transformers/issues/2952/events | https://github.com/huggingface/transformers/issues/2952 | 569,065,993 | MDU6SXNzdWU1NjkwNjU5OTM= | 2,952 | RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding) | {
"login": "Aidanlochbihler",
"id": 18289342,
"node_id": "MDQ6VXNlcjE4Mjg5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/18289342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aidanlochbihler",
"html_url": "https://github.com/Aidanlochbihler",
"followers_url": "https://api.github.com/users/Aidanlochbihler/followers",
"following_url": "https://api.github.com/users/Aidanlochbihler/following{/other_user}",
"gists_url": "https://api.github.com/users/Aidanlochbihler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aidanlochbihler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aidanlochbihler/subscriptions",
"organizations_url": "https://api.github.com/users/Aidanlochbihler/orgs",
"repos_url": "https://api.github.com/users/Aidanlochbihler/repos",
"events_url": "https://api.github.com/users/Aidanlochbihler/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aidanlochbihler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
}
] | closed | false | null | [] | [
"It is weird that there is a discrepancy between Windows and Linux.\r\n\r\nCould you try casting your variables `b_input_ids`, `b_input_mask` and `b_labels` to `torch.long`?\r\n\r\nAre you defining some of your variables on GPU? Does it fail if everything stays on CPU?",
"I often prototype on Windows and push to Linux for final processing and I've never had this issue. Can you post a minimal working example that I can copy-paste to test? ",
"Ok update I got the error to go away but to do it I had to do some janky fixes that I don't think should be necessary\r\n- So if I cast all my variables as ex: b_labels = b_labels.type(torch.LongTensor) and I train on CPU it works (but its super slow)\r\n- If I want to train on GPU I again cast the tensors to long but then have to cast all of my tensors to GPU (.to(device)) even though I already did it\r\n\r\n\r\n\r\n`\r\n\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=numlabels)\r\n \r\n model.cuda()\r\n #model = nn.DataParallel(model)\r\n\r\n # This variable contains all of the hyperparemeter information our training loop needs\r\n # Parameters:\r\n lr = 2e-5\r\n max_grad_norm = 1.0\r\n num_training_steps = 1000\r\n num_warmup_steps = 100\r\n warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1\r\n\r\n ### In Transformers, optimizer and schedules are splitted and instantiated like this:\r\n optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False\r\n scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) # PyTorch scheduler\r\n\r\n t = [] \r\n\r\n # Store our loss and accuracy for plotting\r\n train_loss_set = []\r\n\r\n # Number of training epochs (authors recommend between 2 and 4)\r\n epochs = 5 #5:0.96\r\n\r\n # trange is a tqdm wrapper around the normal python range\r\n for _ in trange(epochs, desc=\"Epoch\"):\r\n # Training\r\n # Set our model to training mode (as opposed to evaluation mode)\r\n model.train()\r\n # Tracking variables\r\n tr_loss = 0\r\n nb_tr_examples, nb_tr_steps = 0, 0\r\n\r\n # Train the data for one epoch\r\n for step, batch in enumerate(train_dataloader):\r\n # Add batch to GPU\r\n batch = tuple(t.to(device) for t in batch)\r\n\r\n # Unpack the inputs from our dataloader\r\n b_input_ids, b_input_mask, b_labels = batch\r\n\r\n ###############Bug fix code####################\r\n b_input_ids = b_input_ids.type(torch.LongTensor)\r\n b_input_mask = b_input_mask.type(torch.LongTensor)\r\n b_labels = b_labels.type(torch.LongTensor)\r\n\r\n b_input_ids = b_input_ids.to(device)\r\n b_input_mask = b_input_mask.to(device)\r\n b_labels = b_labels.to(device)\r\n ############################################\r\n # Clear out the gradients (by default they accumulate)\r\n optimizer.zero_grad()\r\n \r\n # Forward pass\r\n outputs = model(input_ids = b_input_ids, attention_mask=b_input_mask, labels=b_labels)\r\n loss, logits = outputs[:2]\r\n\r\n loss.backward()\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)\r\n optimizer.step()\r\n scheduler.step()\r\n`\r\nVery strange \r\n(posted the code I thought would be useful to see let me know if you need to see more)",
"You're doing `.to(device)` twice for your data (once in the tuple, once separately). It is hard to reproduce this because we don't have your data, so we don't know how you encode your data. What is example contents of `batch` to reproduce your issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\r\n\r\nHad similar issue:\r\nYoung Sheldon's solution on below stackoverflow thread worked well.\r\n\r\nhttps://stackoverflow.com/questions/56360644/pytorch-runtimeerror-expected-tensor-for-argument-1-indices-to-have-scalar-t",
"Having the same issue, funny thing is the whole model worked for training, but while running inference on test data the error automatically showed up",
"\r\n> Having the same issue, funny thing is the whole model worked for training, but while running inference on test data the error automatically showed up\r\n\r\n\r\nExactly the same issue I am facing. I am using Amazon SageMaker notebook instance\r\n",
"Hi,\r\n\r\nI'm working with transformers version 4.4.2 and getting this error when not passing in the `position_ids` kwarg to the model. Adding the following line in `transformers/models/bert/modeling_bert.py` on line 207 fixes the issue for me:\r\n```python\r\n position_ids = position_ids.to(torch.long)\r\n```\r\n\r\nOf course, you can do this work by passing in your own `position_ids`, but that's no fun.",
"hi,I have met the same problem, just because use torch.Tensor( ),.when I check,I change it into torch.tensor,and it's OK.",
"@doris-art \r\nHere's my work around. Assuming `params` is a dict that is passed to the `__call__` method of the model as `**kwargs`:\r\n\r\n```python\r\n# a bug in transformers 4.4.2 requires this\r\n# https://github.com/huggingface/transformers/issues/2952\r\ninput_ids = params['input_ids']\r\nseq_length = input_ids.size()[1]\r\nposition_ids = model.embeddings.position_ids\r\nposition_ids = position_ids[:, 0: seq_length].to(torch.long)\r\nparams['position_ids'] = position_ids\r\n```",
"I am getting the same error. I am unable to resolve it. \r\n\r\n\r\n\r\n### I am using:\r\nPython implementation: CPython\r\nPython version : 3.7.12\r\nIPython version : 7.29.0\r\n\r\nnumpy : 1.19.5\r\npandas : 1.3.4\r\ntorch : 1.9.1\r\ntransformers: 4.12.5\r\n\r\nAny help would be greatly appreciated.",
"I had the same issue in the past. after checking for the many issue for this error. i did some reverse engineering and found that my input been going as empty in the modal train. \r\nIf you pass the input sentence as empty then also faced the same error. I have resolved by filtering my dataset with null/empty sentence data point."
] | 1,582 | 1,658 | 1,588 | NONE | null | # 🐛 Bug
```
File "C:\Users\temp\Aida\aida\agents\bertbot\Bert\bert_intent_classifier_pytorch.py", line 298, in process
logits = self.model(prediction_inputs, token_type_ids=None, attention_mask=prediction_masks)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 897, in forward
head_mask=head_mask)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 624, in forward
embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\transformers\modeling_bert.py", line 167, in forward
words_embeddings = self.word_embeddings(input_ids)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "C:\Users\temp\Anaconda3\envs\fresh\lib\site-packages\torch\nn\functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
```
## Issue
Hi everyone, when I run the line:
```py
outputs = model(input_ids = b_input_ids, attention_mask=b_input_mask, labels=b_labels)
```
with model defined as,
```py
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=numlabels)
```
It returns the error shown above. However, this only happens on my Windows computer.
When I run the exact same code with the same Python version and libraries on Linux, it works perfectly fine.
I have the most up-to-date versions of PyTorch (1.4) and transformers installed.
Any help would be greatly appreciated.
## Information
Using the latest version of pytorch and transformers
Model I am using (Bert, XLNet ...): BertForSequenceClassification
Language I am using the model on (English, Chinese ...): English
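For reference, the casting workaround discussed in the comments above boils down to the following; a minimal sketch using the variable names from the training loop (`nn.Embedding` requires int64/`LongTensor` indices, and on Windows tensors built from numpy's default int arrays come out as int32):
```python
# sketch: force the index tensors to torch.long before moving them to the GPU
b_input_ids = b_input_ids.long().to(device)
b_input_mask = b_input_mask.long().to(device)
b_labels = b_labels.long().to(device)
```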
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2952/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2951/comments | https://api.github.com/repos/huggingface/transformers/issues/2951/events | https://github.com/huggingface/transformers/pull/2951 | 569,032,815 | MDExOlB1bGxSZXF1ZXN0Mzc4MzQ2ODMx | 2,951 | Update modelcard of bert-base-german-cased | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=h1) Report\n> Merging [#2951](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e98f27e4a9bb0ac3d0fe24b94d30da42cdae8a7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2951 +/- ##\n======================================\n Coverage 76.1% 76.1% \n======================================\n Files 98 98 \n Lines 15946 15946 \n======================================\n Hits 12136 12136 \n Misses 3810 3810\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=footer). Last update [3e98f27...d3f1583](https://codecov.io/gh/huggingface/transformers/pull/2951?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"👍 "
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Add image | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2951/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2951",
"html_url": "https://github.com/huggingface/transformers/pull/2951",
"diff_url": "https://github.com/huggingface/transformers/pull/2951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2951.patch",
"merged_at": 1582387724000
} |
https://api.github.com/repos/huggingface/transformers/issues/2950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2950/comments | https://api.github.com/repos/huggingface/transformers/issues/2950/events | https://github.com/huggingface/transformers/issues/2950 | 569,016,025 | MDU6SXNzdWU1NjkwMTYwMjU= | 2,950 | BertTokenizerFast ignores `pad_to_max_length` | {
"login": "ranamihir",
"id": 8270471,
"node_id": "MDQ6VXNlcjgyNzA0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8270471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranamihir",
"html_url": "https://github.com/ranamihir",
"followers_url": "https://api.github.com/users/ranamihir/followers",
"following_url": "https://api.github.com/users/ranamihir/following{/other_user}",
"gists_url": "https://api.github.com/users/ranamihir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranamihir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranamihir/subscriptions",
"organizations_url": "https://api.github.com/users/ranamihir/orgs",
"repos_url": "https://api.github.com/users/ranamihir/repos",
"events_url": "https://api.github.com/users/ranamihir/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranamihir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Duplicate of #2947",
"Thanks @ranamihir @fte10kso, \r\n\r\nI'll have a look today. ",
"Thanks @mfuntowicz!"
] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
Hi,
I noticed some strange behavior with the fast tokenizers in `v2.5.0`, which I think is a bug:
It seems `BertTokenizerFast` is ignoring the `pad_to_max_length` argument, as shown below:
```python
>>> from transformers import AutoTokenizer, BertTokenizer
>>> tok_auto = AutoTokenizer.from_pretrained('bert-base-uncased')
>>> tok_bert = BertTokenizer.from_pretrained('bert-base-uncased')
>>> a, b = 'Sentence 1', 'Sentence 2'
>>>
>>> tok_bert.encode(a, b, max_length=10, pad_to_max_length=True)
[101, 6251, 1015, 102, 6251, 1016, 102, 0, 0, 0] # <-- Expected behavior
>>> tok_auto.encode(a, b, max_length=10, pad_to_max_length=True)
[101, 6251, 1015, 102, 6251, 1016, 102] # <-- Actual behavior
```
Also, can someone please explain the reason for the warning below that's raised when I set `pad_to_max_length=False` (which is only there in the fast tokenizer)?
```python
>>> tok_auto.encode(a, b, max_length=10, pad_to_max_length=False)
Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 0).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
[101, 6251, 1015, 102, 6251, 1016, 102]
>>> tok_bert.encode(a, b, max_length=10, pad_to_max_length=False)
[101, 6251, 1015, 102, 6251, 1016, 102] # <-- No warning
```
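Until the fast tokenizers honor the flag, one stopgap is to pad manually; the helper below is a hypothetical sketch, not a transformers API:
```python
def pad_to(ids, max_length, pad_id):
    # hypothetical helper: right-pad a list of token ids to max_length
    return ids + [pad_id] * (max_length - len(ids))

pad_to(tok_auto.encode(a, b, max_length=10), 10, tok_auto.pad_token_id)
# [101, 6251, 1015, 102, 6251, 1016, 102, 0, 0, 0]
```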
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2950/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2949/comments | https://api.github.com/repos/huggingface/transformers/issues/2949/events | https://github.com/huggingface/transformers/issues/2949 | 568,992,083 | MDU6SXNzdWU1Njg5OTIwODM= | 2,949 | Change the model type after fine-tuning? | {
"login": "jwallat",
"id": 24674150,
"node_id": "MDQ6VXNlcjI0Njc0MTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/24674150?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jwallat",
"html_url": "https://github.com/jwallat",
"followers_url": "https://api.github.com/users/jwallat/followers",
"following_url": "https://api.github.com/users/jwallat/following{/other_user}",
"gists_url": "https://api.github.com/users/jwallat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jwallat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jwallat/subscriptions",
"organizations_url": "https://api.github.com/users/jwallat/orgs",
"repos_url": "https://api.github.com/users/jwallat/repos",
"events_url": "https://api.github.com/users/jwallat/events{/privacy}",
"received_events_url": "https://api.github.com/users/jwallat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Yes, you should be able to just load the finetuned model into another architecture. The weights of overlapping parameter names will be loaded, and the others will be ignored.",
"Awesome, thanks for the fast response ",
"Okay, I came across a followup question: \r\nSo from what I can tell, the pre-trained models come with the weights for the BertForMaskedLM (as this has been used to do the pre-training).\r\nIs there a way to first fine-tune a model on a downstream task (like sentence classification) and than reload it as BertForMaskedLM, reusing the trained LM head? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@BramVanroy could you provide a code example of how to this? I want to do sequential training on different tasks without having to save and load models from disk, or change layers directly myself. Ideally I would use some head switcher function or simply have `.from_pretrained()` accept model instances.",
"@timellemeet Probably best to post a new issue or [a forum post](https://discuss.huggingface.co/)."
] | 1,582 | 1,611 | 1,588 | NONE | null | # ❓ Questions & Help
Is there a way in the transformers library to fine-tune a model on one downstream task and then change to another model type (i.e. another downstream-task architecture)?
An example:
We train a BertForQuestionAnswering model (with a linear layer for span prediction in addition to the regular BERT). Once we have finished training, I want to discard the linear layer on top and reuse the adjusted weights of the 12 BERT layers for sentiment analysis (BertForSequenceClassification).
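In code, that round trip might look like the following; a sketch with a hypothetical checkpoint path, relying on the fact that only overlapping parameter names are loaded:
```python
from transformers import BertForQuestionAnswering, BertForSequenceClassification

qa_model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
# ... fine-tune qa_model on the QA task ...
qa_model.save_pretrained("qa-checkpoint")  # hypothetical local directory

# the shared BERT encoder weights are loaded by name; the QA head is dropped
# and the new classification head starts out randomly initialized
clf_model = BertForSequenceClassification.from_pretrained("qa-checkpoint")
```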
I am aware that this would result in an uninitialized linear layer on top of the BertForSequenceClassification, but that would not be a problem in my case. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2949/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2948/comments | https://api.github.com/repos/huggingface/transformers/issues/2948/events | https://github.com/huggingface/transformers/issues/2948 | 568,969,959 | MDU6SXNzdWU1Njg5Njk5NTk= | 2,948 | Some questions about changing the BertForSequenceClassification | {
"login": "WenTingTseng",
"id": 32416416,
"node_id": "MDQ6VXNlcjMyNDE2NDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/32416416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WenTingTseng",
"html_url": "https://github.com/WenTingTseng",
"followers_url": "https://api.github.com/users/WenTingTseng/followers",
"following_url": "https://api.github.com/users/WenTingTseng/following{/other_user}",
"gists_url": "https://api.github.com/users/WenTingTseng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WenTingTseng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WenTingTseng/subscriptions",
"organizations_url": "https://api.github.com/users/WenTingTseng/orgs",
"repos_url": "https://api.github.com/users/WenTingTseng/repos",
"events_url": "https://api.github.com/users/WenTingTseng/events{/privacy}",
"received_events_url": "https://api.github.com/users/WenTingTseng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"Please don't post screenshots. Post your code instead. Use code blocks to do that: https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"sorry, I changed\r\n",
"Could you give us the content of the `/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/config.json` file? I suspect it might set `vocab_size` to 768.",
"ok, the config file like this , but vocab_size is 21128\r\n```\r\n{\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"bos_token_id\": 0,\r\n \"directionality\": \"bidi\",\r\n \"do_sample\": false,\r\n \"eos_token_ids\": 0,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\"\r\n },\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"is_decoder\": false,\r\n \"label2id\": {\r\n \"LABEL_0\": 0,\r\n \"LABEL_1\": 1\r\n },\r\n \"layer_norm_eps\": 1e-12,\r\n \"length_penalty\": 1.0,\r\n \"max_length\": 20,\r\n \"max_position_embeddings\": 512,\r\n \"num_attention_heads\": 12,\r\n \"num_beams\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_labels\": 267,\r\n \"num_return_sequences\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pad_token_id\": 0,\r\n \"pooler_fc_size\": 768,\r\n \"pooler_num_attention_heads\": 12,\r\n \"pooler_num_fc_layers\": 3,\r\n \"pooler_size_per_head\": 128,\r\n \"pooler_type\": \"first_token_transform\",\r\n \"pruned_heads\": {},\r\n \"repetition_penalty\": 1.0,\r\n \"temperature\": 1.0,\r\n \"top_k\": 50,\r\n \"top_p\": 1.0,\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 2,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 21128\r\n}\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using: Bert
Language I am using the model on: Chinese
The problem arises when using BertForSequenceClassification:
I want to concatenate pooled_output with word2vec embeddings. I changed the code of BertForSequenceClassification like this and successfully trained the model.
I use **merge=torch.cat((pooled_output,Word2Vec),1)**
```python
# the class header below is inferred from the super() call; it was omitted in the pasted snippet
class BertForSequenceClassificationEZAI(BertPreTrainedModel):
    def __init__(self, config):
        super(BertForSequenceClassificationEZAI, self).__init__(config)
        self.num_labels = config.num_labels
        self.bert = BertModel(config)  # load the pretrained BERT model
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        # simple linear layer
        self.classifier = nn.Linear(99072, self.config.num_labels)  # instead of config.hidden_size
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None,
                position_ids=None, head_mask=None, inputs_embeds=None, labels=None):
        # the BERT inputs are just tokens, segments and masks
        outputs = self.bert(input_ids,
                            attention_mask=attention_mask,
                            token_type_ids=token_type_ids,
                            position_ids=position_ids,
                            head_mask=head_mask,
                            inputs_embeds=inputs_embeds)
        # the linear classifier turns the dropped-out BERT representation into class logits
        pooled_output = outputs[1]
        #### add the word-embedding information here ####
        model = word2vec.load('/share/nas165/Wendy/WordbreakCKIP/corpusWord2Vec.bin')
        Word2Vec = torch.from_numpy(model.vectors.flatten()).cuda().float().expand(len(pooled_output), 98304)
        print(Word2Vec.size())
        print(pooled_output.size())
        merge = torch.cat((pooled_output, Word2Vec), 1)
        pooled_output = self.dropout(merge)
        # pooled_output = self.dropout(pooled_output)
        # print(pooled_output.size())
        logits = self.classifier(pooled_output)
        outputs = (logits,) + outputs[1:]  # add hidden states and attention if they are here
        # if labels are given, compute and return the cross-entropy loss directly -- convenient!
        if labels is not None:
            if self.num_labels == 1:
                # We are doing regression
                loss_fct = MSELoss()
                loss = loss_fct(logits.view(-1), labels.view(-1))
            else:
                loss_fct = CrossEntropyLoss()
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs
        return outputs  # (loss), logits, (hidden_states), (attentions)
```
But when I try to predict and load the model like this:
```py
bert_config, bert_class, bert_tokenizer = (BertConfig, BertForSequenceClassification, BertTokenizer)
config = bert_config.from_pretrained('/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/config.json')
model = bert_class.from_pretrained('/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/pytorch_model.bin', from_tf=bool('.ckpt' in 'bert-base-chinese'),config=config)
```
It always raises a RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([267, 99072]) from checkpoint, the shape in current model is torch.Size([267, 768]).
I do not know how to fix it.
Thanks a lot for your help.
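One direction suggested by the size-mismatch message: load the checkpoint with the modified class rather than the stock `BertForSequenceClassification`, so that `classifier.weight` has the expected `(267, 99072)` shape. A sketch, assuming `BertForSequenceClassificationEZAI` from above is importable:
```python
# sketch: use the subclass whose classifier is nn.Linear(99072, num_labels)
model = BertForSequenceClassificationEZAI.from_pretrained(
    '/share/nas165/Wendy/BERTYiChen2.0/trained_model/3371筆_word2Vec/',
    config=config,
)
```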
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2947/comments | https://api.github.com/repos/huggingface/transformers/issues/2947/events | https://github.com/huggingface/transformers/issues/2947 | 568,939,750 | MDU6SXNzdWU1Njg5Mzk3NTA= | 2,947 | Fast tokenizers padding and prefix space | {
"login": "updbqn",
"id": 10659104,
"node_id": "MDQ6VXNlcjEwNjU5MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/10659104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/updbqn",
"html_url": "https://github.com/updbqn",
"followers_url": "https://api.github.com/users/updbqn/followers",
"following_url": "https://api.github.com/users/updbqn/following{/other_user}",
"gists_url": "https://api.github.com/users/updbqn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/updbqn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/updbqn/subscriptions",
"organizations_url": "https://api.github.com/users/updbqn/orgs",
"repos_url": "https://api.github.com/users/updbqn/repos",
"events_url": "https://api.github.com/users/updbqn/events{/privacy}",
"received_events_url": "https://api.github.com/users/updbqn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"This should be fixed in master on commit cc6775cdf5b20ad382613d3bdbf0dd8364d23219.\r\n\r\nIf you want to give it a try, otherwise the first maintenance release will soon be live.",
"That fixed it for me, thank you!\r\n\r\nAlso figured out what i was doing wrong with `add_prefix_space`. Instead of passing it to encode as for the regular tokenizers it should be provided at init e.g.\r\n\r\n```python\r\nfrom transformers import RobertaTokenizerFast\r\nt = RobertaTokenizerFast.from_pretrained('roberta-base', add_prefix_space=True)\r\nt.decode(t.encode('hello huggingface'))\r\n# '<s> hello huggingface</s>'\r\n```"
] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
`pad_to_max_length=True` doesn't seem to do anything when using fast tokenizers.
```python
from transformers import BertTokenizer, BertTokenizerFast, RobertaTokenizer, RobertaTokenizerFast

tokenizer_roberta = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer_roberta_fast = RobertaTokenizerFast.from_pretrained('roberta-base')
tokenizer_bert = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer_bert_fast = BertTokenizerFast.from_pretrained('bert-base-uncased')
def test_encode_decode(name, text, tokenizer):
encoded = tokenizer.encode(text, max_length=10, pad_to_max_length=True)
decoded = tokenizer.decode(encoded)
print(name)
print(encoded)
print(decoded)
print()
text = 'hello huggingface'
test_encode_decode('bert', text, tokenizer_bert)
test_encode_decode('bert_fast', text, tokenizer_bert_fast)
test_encode_decode('roberta', text, tokenizer_roberta)
test_encode_decode('roberta_fast', text, tokenizer_roberta_fast)
```
**Output:**
```
bert
[101, 7592, 17662, 12172, 102, 0, 0, 0, 0, 0]
[CLS] hello huggingface [SEP] [PAD] [PAD] [PAD] [PAD] [PAD]
bert_fast
[101, 7592, 17662, 12172, 102]
[CLS] hello huggingface [SEP]
roberta
[0, 20760, 31164, 9021, 2, 1, 1, 1, 1, 1]
<s> hello huggingface</s><pad><pad><pad><pad><pad>
roberta_fast
[0, 42891, 31164, 9021, 2]
<s>hello huggingface</s>
```
Additionally, I can't seem to make `add_prefix_space=True` work with `RobertaTokenizerFast`. I can only get the same output as `RobertaTokenizer` if I manually prepend a space. I saw the warning and link to #2778 but I'm not sure if I follow completely.
Thanks for taking the time!
Edit: I'm up to date with master, latest commit 53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2947/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2946/comments | https://api.github.com/repos/huggingface/transformers/issues/2946/events | https://github.com/huggingface/transformers/issues/2946 | 568,912,102 | MDU6SXNzdWU1Njg5MTIxMDI= | 2,946 | On masked-lm labels and computing the loss | {
"login": "shashankMadan-designEsthetics",
"id": 45225143,
"node_id": "MDQ6VXNlcjQ1MjI1MTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/45225143?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shashankMadan-designEsthetics",
"html_url": "https://github.com/shashankMadan-designEsthetics",
"followers_url": "https://api.github.com/users/shashankMadan-designEsthetics/followers",
"following_url": "https://api.github.com/users/shashankMadan-designEsthetics/following{/other_user}",
"gists_url": "https://api.github.com/users/shashankMadan-designEsthetics/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shashankMadan-designEsthetics/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shashankMadan-designEsthetics/subscriptions",
"organizations_url": "https://api.github.com/users/shashankMadan-designEsthetics/orgs",
"repos_url": "https://api.github.com/users/shashankMadan-designEsthetics/repos",
"events_url": "https://api.github.com/users/shashankMadan-designEsthetics/events{/privacy}",
"received_events_url": "https://api.github.com/users/shashankMadan-designEsthetics/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> `labels[~masked_indices] = -100 # We only compute loss on masked tokens`\r\n\r\n-100 is the default value that gets ignored by the PyTorch `CrossEntropyLoss` method. When doing masked language modeling, we only compute the loss on the masked tokens, as is said in the comment. \r\n\r\n> `Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`\r\n\r\nThe wording could be improved here, but it means the same thing. The tokens with indices set to `-100` are seen as masked from the loss point of view, which means they will not be computed. These are not masked indices in the sense of masked language modeling.",
"<img width=\"748\" alt=\"Screenshot 2020-02-22 at 3 22 40 PM\" src=\"https://user-images.githubusercontent.com/45225143/75090283-441e3e00-5587-11ea-92da-0411b0752fa5.png\">\r\n\r\nI opened up a collab and tried to simulate what is going on thanks to your comment I followed through...\r\nWhat I understand from this is we do label -100 to ignore some values that the `torch.bernoulli` has given probability of being `False`\r\nand the ones which are left are then again fed into `torch.bernoulli` and given 0.8 percent to be masked ie we convert their ids as `tokenzer.mask_tokens`\r\nand now as seen in the screenshot 80% was the chance to be masked hence both of the ids where masked and we have built our labels tensor in such a way that it will compute the masked tokens and leave -100 be as cross-entropy will simply ignore these value, in some way we assert that these values (-100) are already right and they are used in self-attention and hence dont compute their loss which also will be simply expensive\r\nPls review on this",
"@LysandreJik I agree that the wording could be improved. I'm still not totally sure about the utility of the `masked_lm_labels` argument. I presume it is mainly for ignoring common words and special tokens during the MLM pretraining?",
"The `masked_lm_labels` are the labels used for computing the masked language modeling loss. There are examples in the documentation showing how to use these, for example [at the end of the `BertForMaskedLM` documentation.](https://huggingface.co/transformers/model_doc/bert.html#bertformaskedlm)",
"@LysandreJik Isn't the example mentioned in the official documentation missing the following line of code before feeding _labels_ into model?\r\n\r\n `labels[inputs.input_ids != tokenizer.mask_token_id] = -100 ` \r\n\r\nI believe, with this we calculate the negative log likelihood, just for the masked token which is `Paris' in the given example.\r\n\r\n\r\n",
"> @LysandreJik Isn't the example mentioned in the official documentation missing the following line of code before feeding _labels_ into model?\r\n> \r\n> `labels[inputs.input_ids != tokenizer.mask_token_id] = -100 `\r\n> \r\n> I believe, with this we calculate the negative log likelihood, just for the masked token which is `Paris' in the given example.\r\n\r\nYes, I was wondering why this is missing as well. There doesn't seem to be any documentation indicating that this is happening automatically before the loss is computed. And, based on some limited testing on my end I get different values for the loss when I do this."
] | 1,582 | 1,637 | 1,582 | NONE | null | Recently I was using bert for my own project, and going through the function mask_tokens I found this line of code
`labels[~masked_indices] = -100 # We only compute loss on masked tokens`
I wonder why we do this.
I get the part where we do
```
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
```
to mask the input tokens, but is it necessary for the labels?
If I had a constant -100 as the ground truth while the actual id is, say, 1000, the loss might never converge.
And I've found two contradictory comments, i.e.
`labels[~masked_indices] = -100 # We only compute loss on masked tokens` (from `run_language_modeling`)
and
```
masked_lm_labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
    Labels for computing the masked language modeling loss.
    Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
    Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels
    in ``[0, ..., config.vocab_size]``
```
(from `modeling_bert`)
One says the loss is only computed on masked tokens, while the other says tokens set to -100 are ignored when computing the loss (see the quick check below).
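For what it's worth, the two statements are consistent once you know that PyTorch's `nn.CrossEntropyLoss` ignores targets equal to its `ignore_index`, which defaults to -100; a minimal sketch:
```python
import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()           # ignore_index defaults to -100
logits = torch.randn(4, 10)                # (num_tokens, vocab_size)
labels = torch.tensor([3, -100, 7, -100])
loss = loss_fct(logits, labels)            # only positions 0 and 2 contribute
```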
Could anyone please let me know about it... Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2946/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2945/comments | https://api.github.com/repos/huggingface/transformers/issues/2945/events | https://github.com/huggingface/transformers/pull/2945 | 568,902,215 | MDExOlB1bGxSZXF1ZXN0Mzc4MjM4NDkz | 2,945 | Labels are now added to model config under id2label and label2id | {
"login": "marma",
"id": 144026,
"node_id": "MDQ6VXNlcjE0NDAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/144026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marma",
"html_url": "https://github.com/marma",
"followers_url": "https://api.github.com/users/marma/followers",
"following_url": "https://api.github.com/users/marma/following{/other_user}",
"gists_url": "https://api.github.com/users/marma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marma/subscriptions",
"organizations_url": "https://api.github.com/users/marma/orgs",
"repos_url": "https://api.github.com/users/marma/repos",
"events_url": "https://api.github.com/users/marma/events{/privacy}",
"received_events_url": "https://api.github.com/users/marma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=h1) Report\n> Merging [#2945](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2945 +/- ##\n======================================\n Coverage 76.1% 76.1% \n======================================\n Files 98 98 \n Lines 15946 15946 \n======================================\n Hits 12136 12136 \n Misses 3810 3810\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=footer). Last update [53ce385...0a5ab3f](https://codecov.io/gh/huggingface/transformers/pull/2945?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I had noticed this indeed, thanks for fixing!",
"@marma So the labels are supposed to be the actual labels instead of placeholders like \"Label_0\", \"Label_1\" ... right? I tried it out in 3.3.1 but it still generates the placeholder labels in config. I guess I am missing something? I would be grateful if you could help! :) \r\n\r\n",
"I would also be interested in learning how I can store the actual label2id and id2label dictionaries along with a pre-trained model. Is this possible?",
"@konstantinmiller & @naveenjafer You can pass label2id and id2label to config then pass that config to model like in below snippet:\r\n```py\r\nconfig = AutoConfig.from_pretrained(\r\n model_args.config_name if model_args.config_name else model_args.model_name_or_path,\r\n num_labels=num_labels,\r\n id2label={i: label for i, label in enumerate(label_list)},\r\n label2id={label: i for i, label in enumerate(label_list)},\r\n finetuning_task=data_args.task_name)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config )\r\n```"
] | 1,582 | 1,613 | 1,582 | CONTRIBUTOR | null | Fixes huggingface/transformers#2487 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2945/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2945",
"html_url": "https://github.com/huggingface/transformers/pull/2945",
"diff_url": "https://github.com/huggingface/transformers/pull/2945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2945.patch",
"merged_at": 1582293186000
} |
https://api.github.com/repos/huggingface/transformers/issues/2944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2944/comments | https://api.github.com/repos/huggingface/transformers/issues/2944/events | https://github.com/huggingface/transformers/issues/2944 | 568,882,544 | MDU6SXNzdWU1Njg4ODI1NDQ= | 2,944 | output padding different to zero in embedding layer | {
"login": "akamnev",
"id": 19535300,
"node_id": "MDQ6VXNlcjE5NTM1MzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19535300?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akamnev",
"html_url": "https://github.com/akamnev",
"followers_url": "https://api.github.com/users/akamnev/followers",
"following_url": "https://api.github.com/users/akamnev/following{/other_user}",
"gists_url": "https://api.github.com/users/akamnev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akamnev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akamnev/subscriptions",
"organizations_url": "https://api.github.com/users/akamnev/orgs",
"repos_url": "https://api.github.com/users/akamnev/repos",
"events_url": "https://api.github.com/users/akamnev/events{/privacy}",
"received_events_url": "https://api.github.com/users/akamnev/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"Why? Typically, padding tokens are ignored in the model by the use of an attention mask. I don't understand why you want to only get the output of the embedding layer. (Perhaps you are confused with the good ol' word2vec embedding models, but you cannot/should not extract features from the embedding layer in a LM.)"
] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
In the embedding layer, the token corresponding to padding does not return a zero vector.
## Information
Model I am using (Bert):
Language I am using the model on (English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers.tokenization_bert import BertTokenizer
from transformers.modeling_bert import BertEmbeddings
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
class Config:
vocab_size = tokenizer.vocab_size
hidden_size = 768
max_position_embeddings = 512
type_vocab_size = 2
hidden_dropout_prob = 0.1
layer_norm_eps = 1e-12
max_length = 10
sentence = "I eat a green apple"
tokens = tokenizer.encode(sentence)
tokens += [tokenizer.pad_token_id] * (max_length - len(tokens))
print(tokens)
embedding = BertEmbeddings(Config)
input_ids = torch.tensor([tokens])
emb = embedding(input_ids)
print(emb[0][-1])  # the last position should be a zero tensor
```
## Expected behavior
I expect to get a zero output tensor
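For context, the reply below notes that the embedding module also adds position and token-type embeddings and applies LayerNorm, so even pad tokens produce non-zero vectors; padding is normally neutralized downstream with an attention mask instead. A minimal sketch:
```python
# sketch: mask out padded positions rather than expecting zero embeddings
attention_mask = (input_ids != tokenizer.pad_token_id).long()
# pass attention_mask to the full model so padded positions are ignored
```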
## Environment info
- `transformers` version: 2.3.0
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.3.1 (CPU)
- Tensorflow version (GPU?): no
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2944/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2943/comments | https://api.github.com/repos/huggingface/transformers/issues/2943/events | https://github.com/huggingface/transformers/issues/2943 | 568,720,888 | MDU6SXNzdWU1Njg3MjA4ODg= | 2,943 | fp16 is not compatible with the current activation code when pytorch is less than 1.4.0 | {
"login": "bcmi220",
"id": 39052744,
"node_id": "MDQ6VXNlcjM5MDUyNzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/39052744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bcmi220",
"html_url": "https://github.com/bcmi220",
"followers_url": "https://api.github.com/users/bcmi220/followers",
"following_url": "https://api.github.com/users/bcmi220/following{/other_user}",
"gists_url": "https://api.github.com/users/bcmi220/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bcmi220/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bcmi220/subscriptions",
"organizations_url": "https://api.github.com/users/bcmi220/orgs",
"repos_url": "https://api.github.com/users/bcmi220/repos",
"events_url": "https://api.github.com/users/bcmi220/events{/privacy}",
"received_events_url": "https://api.github.com/users/bcmi220/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a duplicate of https://github.com/huggingface/transformers/issues/2940"
] | 1,582 | 1,582 | 1,582 | NONE | null | [gelu = getattr(F, "gelu", _gelu_python)](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/activations.py#L21)
should be changed to:
```python
if torch.__version__ < '1.4.0':
    # fall back to the pure-Python gelu; F.gelu reportedly breaks under fp16 on older torch
    gelu = _gelu_python
else:
gelu = getattr(F, "gelu", _gelu_python)
```
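A note on the comparison itself: a plain string comparison is fragile once two-digit minor versions exist (`'1.10.0' < '1.4.0'` is true lexicographically). A more robust guard might use `packaging`; a sketch reusing `_gelu_python` from the module above:
```python
import torch
from packaging import version

if version.parse(torch.__version__) < version.parse("1.4.0"):
    gelu = _gelu_python
else:
    gelu = getattr(torch.nn.functional, "gelu", _gelu_python)
```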
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2942/comments | https://api.github.com/repos/huggingface/transformers/issues/2942/events | https://github.com/huggingface/transformers/pull/2942 | 568,703,254 | MDExOlB1bGxSZXF1ZXN0Mzc4MDc3MTg1 | 2,942 | Create README.md for xlnet_large_squad | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=h1) Report\n> Merging [#2942](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2942 +/- ##\n======================================\n Coverage 76.1% 76.1% \n======================================\n Files 98 98 \n Lines 15946 15946 \n======================================\n Hits 12136 12136 \n Misses 3810 3810\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.15% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=footer). Last update [53ce385...cfa068f](https://codecov.io/gh/huggingface/transformers/pull/2942?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2942/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2942",
"html_url": "https://github.com/huggingface/transformers/pull/2942",
"diff_url": "https://github.com/huggingface/transformers/pull/2942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2942.patch",
"merged_at": 1582293282000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2941/comments | https://api.github.com/repos/huggingface/transformers/issues/2941/events | https://github.com/huggingface/transformers/issues/2941 | 568,653,260 | MDU6SXNzdWU1Njg2NTMyNjA= | 2,941 | pipeline("sentiment-analysis")() can't handle more than 2 sentences | {
"login": "xxbidiao",
"id": 1439638,
"node_id": "MDQ6VXNlcjE0Mzk2Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1439638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xxbidiao",
"html_url": "https://github.com/xxbidiao",
"followers_url": "https://api.github.com/users/xxbidiao/followers",
"following_url": "https://api.github.com/users/xxbidiao/following{/other_user}",
"gists_url": "https://api.github.com/users/xxbidiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xxbidiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xxbidiao/subscriptions",
"organizations_url": "https://api.github.com/users/xxbidiao/orgs",
"repos_url": "https://api.github.com/users/xxbidiao/repos",
"events_url": "https://api.github.com/users/xxbidiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xxbidiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"`scores = np.exp(outputs) / np.exp(outputs).sum(-1).reshape(-1,1)` works for me, but I'm not sure whether it breaks other things.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This still happens on the latest version. I still have to apply\r\n`scores = np.exp(outputs) / np.exp(outputs).sum(-1).reshape(-1,1)`\r\nFor the code to work.",
"Yes, this is in the process of being solved by @mfuntowicz "
] | 1,582 | 1,588 | 1,588 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): pipeline("sentiment-analysis")
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
>>> from transformers import pipeline
>>> analyzer = pipeline('sentiment-analysis')
Downloading: 100%|██████████████████████████████| 230/230 [00:00<00:00, 146kB/s]
>>> analyzer(["OK"]*10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.6/site-packages/transformers/pipelines.py", line 490, in __call__
scores = np.exp(outputs) / np.exp(outputs).sum(-1)
ValueError: operands could not be broadcast together with shapes (10,2) (10,)
>>>
```
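A minimal NumPy sketch of the shape mismatch and of the `reshape(-1, 1)` fix mentioned in the comments (illustrative; not the pipeline source itself):

```python
import numpy as np

outputs = np.random.randn(10, 2)             # logits: 10 sequences, 2 labels
row_sums = np.exp(outputs).sum(-1)           # shape (10,): dividing (10, 2) by it raises
scores = np.exp(outputs) / row_sums.reshape(-1, 1)  # (10, 1) broadcasts across columns
# equivalent: np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=True)
assert np.allclose(scores.sum(-1), 1.0)      # each row is now a proper softmax
# note: with exactly 2 inputs, (2, 2) / (2,) broadcasts silently but normalizes
# the wrong axis, which is why the title says "more than 2 sentences"
```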
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Getting 10 results
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.0
- Platform: ubuntu 19.04
- Python version: 3.6
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?): 1.14.0 GPU
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2940/comments | https://api.github.com/repos/huggingface/transformers/issues/2940/events | https://github.com/huggingface/transformers/issues/2940 | 568,624,741 | MDU6SXNzdWU1Njg2MjQ3NDE= | 2,940 | BERT model breaks during FP16 Apex training on the latest update (2.5.0) - due to gelu function | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,582 | 1,582 | 1,582 | NONE | null | # 🐛 Bug
BERT breaks during FP16 training due to the gelu function.
```
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 379, in forward
intermediate_output = self.intermediate(attention_output)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/transformers/modeling_bert.py", line 332, in forward
hidden_states = self.intermediate_act_fn(hidden_states)
File "/home/user/miniconda/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 1125, in gelu
return torch._C._nn.gelu(input)
RuntimeError: "GeluCUDAKernelImpl" not implemented for 'Half'
```
## Information
This is happening because, in 2.4.1, `modeling_bert.py` had:
```
def gelu(x):
""" Original Implementation of the gelu activation function in Google Bert repo when initially created.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
...
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new, "mish": mish}
```
whereas in 2.5.0 we have:
```
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new, "mish": mish}
```
where `gelu` now comes from `activations.py` as:
`gelu = getattr(F, "gelu", _gelu_python)` on line 21.
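A hypothetical workaround sketch, not the library's fix: restore the erf-based Python gelu from 2.4.1 by patching the `ACT2FN` dict quoted above before building the model (assumption: `torch.erf` has a fp16 CUDA kernel on PyTorch 1.2, which is why the 2.4.1 code trained fine under Apex):

```python
import math
import torch
from transformers import modeling_bert

def gelu_python(x):
    # The erf-based implementation transformers 2.4.1 used for BERT.
    return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

# Patch before instantiating the model so BertIntermediate resolves this function.
modeling_bert.ACT2FN["gelu"] = gelu_python
```

Upgrading to a PyTorch version whose `torch.nn.functional.gelu` has a Half kernel would avoid the patch entirely.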
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0 CUDA 10.0
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: V100
- Using distributed or parallel set-up in script?: No, but using Apex FP16 training
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2940/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2939/comments | https://api.github.com/repos/huggingface/transformers/issues/2939/events | https://github.com/huggingface/transformers/pull/2939 | 568,558,438 | MDExOlB1bGxSZXF1ZXN0Mzc3OTY0NDQ1 | 2,939 | Add standardized get_vocab method to tokenizers | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=h1) Report\n> Merging [#2939](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ea8eba35e2984882c3cd522ff669eb8060941a94?src=pr&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `89.65%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2939 +/- ##\n==========================================\n- Coverage 75.35% 74.32% -1.03% \n==========================================\n Files 94 94 \n Lines 15445 15474 +29 \n==========================================\n- Hits 11638 11501 -137 \n- Misses 3807 3973 +166\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.71% <100%> (+1.77%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.39% <100%> (+0.13%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.19% <100%> (+0.07%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `97.02% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.52% <100%> (+0.41%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.5% <100%> (+0.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.87% <100%> (+0.04%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.61% <100%> (+0.16%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.17% <100%> (+0.36%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.22% <100%> (+0.31%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/2939/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=footer). Last update [ea8eba3...197d74f](https://codecov.io/gh/huggingface/transformers/pull/2939?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@sshleifer thanks for the review! The trouble is that different tokenizers store their vocabs pretty differently (thus this PR) – only BERT-inhereted tokenizers currently have a `self.vocab`, for example. I'd argue it's better to make it explicit that subclasses need to implement it rather than risk a silent error (i.e. if a subclass defines a `self.vocab` property differently than BERT's tokenizer does)."
] | 1,582 | 1,598 | 1,582 | CONTRIBUTOR | null | This PR adds a `get_vocab` method to the `PretrainedTokenizers` to standardize extracting vocabularies from tokenizers.
Comments:
- I didn't do anything with fast tokenizers. cc'ing @mfuntowicz for his thoughts there.
- I opted to keep it a method rather than a `@property` in order to encourage users to primarily use existing methods like `convert_tokens_to_ids` for general encoding/decoding purposes and use `get_vocab` only when they need the entire vocabulary.
- For tokenizers which rely on `sentencepiece`, I was unable to figure out a better way to get the vocabs than to loop through it (see the sketch below). If someone knows a better way, please let me know.
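A minimal sketch of that looping approach, written as a method on a sentencepiece-backed tokenizer (illustrative of the idea; the per-tokenizer code in this PR may differ):

```python
def get_vocab(self):
    # Walk every id the sentencepiece model knows and map token -> id,
    # then layer tokens added after pretraining on top.
    vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
    vocab.update(self.added_tokens_encoder)
    return vocab
```
| {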
"url": "https://api.github.com/repos/huggingface/transformers/issues/2939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2939",
"html_url": "https://github.com/huggingface/transformers/pull/2939",
"diff_url": "https://github.com/huggingface/transformers/pull/2939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2939.patch",
"merged_at": 1582391342000
} |
https://api.github.com/repos/huggingface/transformers/issues/2938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2938/comments | https://api.github.com/repos/huggingface/transformers/issues/2938/events | https://github.com/huggingface/transformers/issues/2938 | 568,557,334 | MDU6SXNzdWU1Njg1NTczMzQ= | 2,938 | OpenAIGPTDoubleHeadsModel throws CUDA OOM with large number of candidates | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"Isn't that to be expected? The large model just doesn't fit on your GPU's memory.",
"@BramVanroy is that expected behavior with just 1 training example though?\r\n\r\nI initially suspected that this is a training-specific behavior (due to the need to store gradients, etc.), so I decided to fix the number of candidates to something small during training.\r\n\r\nI then attempted to do inference with this trained model, but I used all candidates during inference. I took 1 example to do inference on, and I observed memory issues with that too.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
I am trying to train the `OpenAIGPTDoubleHeadsModel`. I find that a large number of candidates can cause CUDA OOM errors.
Case 1 (single training example with 67 candidates): CUDA OOM
```
input_ids.shape: torch.Size([1, 67, 275])
mc_token_ids.shape: torch.Size([1, 67])
lm_labels.shape: torch.Size([1, 67, 275])
mc_labels.shape: torch.Size([1])
token_type_ids.shape: torch.Size([1, 67, 275])
```
Case 2 (single training example with 3 candidates): works fine!
```
input_ids.shape: torch.Size([1, 3, 275])
mc_token_ids.shape: torch.Size([1, 3])
lm_labels.shape: torch.Size([1, 3, 275])
mc_labels.shape: torch.Size([1])
token_type_ids.shape: torch.Size([1, 3, 275])
```
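A back-of-the-envelope sketch of why the candidate count matters: each candidate is a full sequence through the transformer, so activation memory grows roughly linearly in the number of candidates (and the attention maps grow quadratically in sequence length). The layer/head/hidden sizes below are GPT-1-like values assumed purely for illustration:

```python
def approx_activation_gib(candidates, seq_len, layers=12, heads=12, hidden=768, fp_bytes=4):
    # Very rough: counts per-layer hidden states and attention probability maps,
    # ignoring keys/values, FFN intermediates, and backward-pass storage.
    hidden_states = candidates * seq_len * hidden * layers
    attn_maps = candidates * heads * seq_len * seq_len * layers
    return (hidden_states + attn_maps) * fp_bytes / 2**30

print(approx_activation_gib(3, 275))    # ~0.15 GiB
print(approx_activation_gib(67, 275))   # ~3.4 GiB, before gradients and optimizer state
```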
## Information
Model I am using: `OpenAIGPTDoubleHeadsModel`
Language I am using the model on: English
The problem arises when using my own modified scripts based on the `transfer-learning-conv-ai` repo by Hugging Face.
## To reproduce
Simply try training `OpenAIGPTDoubleHeadsModel` with larger number of candidates (such as 67).
## Expected behavior
3 or 67 candidates shouldn't matter, both cases 1 and 2 should work fine without CUDA OOM.
## Environment info
- `transformers` version: 2.3.0
- Platform: Amazon Linux (Deep Learning AMI)
- Python version: 3.6
- PyTorch version (GPU?): the one shipped with the pytorch_p36 conda env in Amazon DL AMI
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes and No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2937/comments | https://api.github.com/repos/huggingface/transformers/issues/2937/events | https://github.com/huggingface/transformers/pull/2937 | 568,546,409 | MDExOlB1bGxSZXF1ZXN0Mzc3OTU0NTc5 | 2,937 | Small fix: default args for torch-lightning | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=h1) Report\n> Merging [#2937](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2a6445ebbc36121817c1f605d9a09a335f5fba5?src=pr&el=desc) will **decrease** coverage by `1.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2937 +/- ##\n==========================================\n- Coverage 75.35% 74.28% -1.07% \n==========================================\n Files 94 94 \n Lines 15445 15445 \n==========================================\n- Hits 11638 11473 -165 \n- Misses 3807 3972 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2937/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=footer). Last update [e2a6445...34e9098](https://codecov.io/gh/huggingface/transformers/pull/2937?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | Fix to the default argument passing to torch-lightning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2937/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2937",
"html_url": "https://github.com/huggingface/transformers/pull/2937",
"diff_url": "https://github.com/huggingface/transformers/pull/2937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2937.patch",
"merged_at": 1582230678000
} |
https://api.github.com/repos/huggingface/transformers/issues/2936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2936/comments | https://api.github.com/repos/huggingface/transformers/issues/2936/events | https://github.com/huggingface/transformers/issues/2936 | 568,530,878 | MDU6SXNzdWU1Njg1MzA4Nzg= | 2,936 | New tokenizers issue in NER demo | {
"login": "srush",
"id": 35882,
"node_id": "MDQ6VXNlcjM1ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srush",
"html_url": "https://github.com/srush",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"organizations_url": "https://api.github.com/users/srush/orgs",
"repos_url": "https://api.github.com/users/srush/repos",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"received_events_url": "https://api.github.com/users/srush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"This might be https://github.com/huggingface/transformers/issues/2917",
"@srush I'm looking at it right now ",
"I have the same issue while running run_tf_ner.py on the German NER dataset. I got the same AssertionError as below:\r\n> File \"run_tf_ner.py\", line 655, in <module>\r\n> app.run(main)\r\n> File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 299, in run\r\n> _run_main(main, args)\r\n> File \"/usr/local/lib/python3.6/dist-packages/absl/app.py\", line 250, in _run_main\r\n> sys.exit(main(argv))\r\n> File \"run_tf_ner.py\", line 540, in main\r\n> args, tokenizer, labels, pad_token_label_id, train_batch_size, mode=\"train\"\r\n> File \"run_tf_ner.py\", line 451, in load_and_cache_examples\r\n> pad_token_label_id=pad_token_label_id,\r\n> File \"/content/transformers/examples/ner/utils_ner.py\", line 182, in convert_examples_to_features\r\n> assert len(label_ids) == max_seq_length\r\n> AssertionError\r\n\r\nMy idea is that the [pad_token_label_id = 0](https://github.com/huggingface/transformers/blob/94ff2d6ee8280c5595b92c1128c0f18e44925e56/examples/ner/run_tf_ner.py#L511) may conflict with the orginal label_list id. Becase in utils_ner.py (line 104), `label_map = {label: i for i, label in enumerate(label_list)}` . \r\nBy the way, I run the same code on CoNLL-2003 dataset with default labels:\r\n[\"O\", \"B-MISC\", \"I-MISC\", \"B-PER\", \"I-PER\", \"B-ORG\", \"I-ORG\", \"B-LOC\", \"I-LOC\"]\r\nNo such error message... It may be 'O' is the first token but in Germen dataset 'O' was the last label in label.text. \r\nI don't know if this is the real problem, but I hope this bug can be fixed soon. Thx.\r\n \r\n",
"> @srush I'm looking at it right now\r\n\r\nI have the same issue as @srush. Any idea yet what the issue could be?",
"It works fine with 2.4 so it is likely an issue with the new tokenizer.",
"Hi @srush ,\r\n\r\nI am using transformers==2.4.1, but still facing the problem with a custom data set (with extra labels). It is working with the german dataset though. Could you be more specific about the version of transformers that you use.\r\nThanks\r\n",
"Hi @cibinjohn , \r\n\r\nCan you tell us which tokenizers / model you're using ? We fixed something for `bert-base-multilingual` should be merge in master quite soon.\r\n",
"Any one got solution for this yet? I have the same issue with this. I used transformer 2.5.1 and the latest tokenizer come with the transformers installation by default. \r\nLooks like the assert len(label_ids) == max_seq_length because len(label_ids) is one more than max_seq_length while 3 other asserts before it pass the tests. ",
"@yuyongze Have you made any progress on this?\r\n\r\nI think `pad_token_label_id = 0` is actually another issue: #3332 \r\nAnd related: https://stackoverflow.com/questions/60732509/label-handling-confusion-in-run-tf-ner-example",
"Experiencing the same problem launching run_ner.py on the WNUT17 dataset, although on German and CONLL-2003 everything works fine.",
"I just want to run the ner demo (TF-version) but the same issue/ error raises.. Tried with transformers version 2.4/5/6 still the same error raises. Has anyone a solution? \r\n\r\nEdit: PyTorch script seems to work",
"@mfuntowicz I am using `bert-base-multilingual` model & transformers==2.4.1. \r\n\r\nCould you let me know when the solution will be merged with master. Looking forward to hear from you.\r\nThanks in advance\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,592 | 1,592 | CONTRIBUTOR | null | # 🐛 Bug
When I use transformers 2.5 I get the following error when running the run_ner demo; it works when I use 2.4. I am guessing this is because of a slight difference in the tokenization script? It fails even when running with the base German run.sh in the ner/ directory.
```
Traceback (most recent call last):
File "run_pl_ner.py", line 233, in <module>
trainer = generic_train(model, args)
File "/content/transformers/examples/ner/transformer_base.py", line 268, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 911, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 464, in single_gpu_train
self.optimizers, self.lr_schedulers = self.init_optimizers(model.configure_optimizers())
File "/content/transformers/examples/ner/transformer_base.py", line 92, in configure_optimizers
* float(self.hparams.num_train_epochs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/decorators.py", line 19, in _get_data_loader
value = fn(self) # Lazy evaluation, done only once.
File "/content/transformers/examples/ner/transformer_base.py", line 132, in train_dataloader
return self.load_dataset("train", self.hparams.train_batch_size)
File "run_pl_ner.py", line 50, in load_dataset
dataset = self.load_and_cache_examples(labels, self.pad_token_label_id, mode)
File "run_pl_ner.py", line 175, in load_and_cache_examples
pad_token_label_id=pad_token_label_id,
File "/content/transformers/examples/ner/utils_ner.py", line 182, in convert_examples_to_features
assert len(label_ids) == max_seq_length
```
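A hedged repro sketch of the suspected mechanism (names are illustrative; the guess is that the 2.5 tokenizer splits some words differently, so the label padding in `utils_ner.py` drifts out of sync):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
word = "Schartau"                 # any word from the German CoNLL data
subtokens = tok.tokenize(word)
# utils_ner.py emits one real label plus (len(subtokens) - 1) pad labels per word;
# if tokenize() returns a different split in 2.5 than in 2.4, the accumulated
# len(label_ids) no longer equals max_seq_length and the assert fires.
print(subtokens, len(subtokens))
```
| {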
"url": "https://api.github.com/repos/huggingface/transformers/issues/2936/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2935/comments | https://api.github.com/repos/huggingface/transformers/issues/2935/events | https://github.com/huggingface/transformers/pull/2935 | 568,482,750 | MDExOlB1bGxSZXF1ZXN0Mzc3OTA0MTEx | 2,935 | Optimized squad.py multi-threading | {
"login": "birdmw",
"id": 2925772,
"node_id": "MDQ6VXNlcjI5MjU3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2925772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/birdmw",
"html_url": "https://github.com/birdmw",
"followers_url": "https://api.github.com/users/birdmw/followers",
"following_url": "https://api.github.com/users/birdmw/following{/other_user}",
"gists_url": "https://api.github.com/users/birdmw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/birdmw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/birdmw/subscriptions",
"organizations_url": "https://api.github.com/users/birdmw/orgs",
"repos_url": "https://api.github.com/users/birdmw/repos",
"events_url": "https://api.github.com/users/birdmw/events{/privacy}",
"received_events_url": "https://api.github.com/users/birdmw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This was a big performance hiccup for us. We managed to take a `QuestionAnswerAnsweringPipeline` down from 12 seconds to 6 seconds (with CUDA) and then to ~1 second (with the serialization removal optimization).",
"The code quality check is literally just that one line is too long.",
"> The code quality check is literally just that one line is too long.\r\n\r\nHave a look at the contributing guidelines, in particular step 5. https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests\r\n\r\nDoing style and quality checks locally makes sure that your pull request doesn't get that annoying '1 failing' note.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Thank you, this is huge. On a g4dn.xlarge, with distilbert-base-cased-distilled-squad, I get about 60% speedup."
] | 1,582 | 1,595 | 1,588 | NONE | null | If there are few examples, don't bother multi-threading . The base cost of multi-threading in Python is expensive. I can't upload a picture of the call-stack visual profile because of my stupid company firewall, but multi-threading in python depends on serializing each object before sending it off. That process has a 5 second overhead coming from <method 'dump' of '_pickle.Pickler' objects>. Don't take my word for it - test it yourself. This small optimization reduces invocation run-time from 6s down to 1.1s for a single example inference where the len(examples) == 1. This optimization singular but it should be echoed across all pipelines in the Transformers repo.
ps: I also see a lot of lists with appends across the Transformers repository as a whole. This is a big speed suck. Look into using `collections.deque` more - deques are like lists but are optimized for appends and pops at both ends.
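A minimal sketch of the guard described above (names and threshold are illustrative, not the actual diff):

```python
from multiprocessing import Pool

def convert_examples(examples, convert_one, threads=4, min_parallel=16):
    # Spawning a Pool pickles every object it ships to workers, a multi-second
    # fixed cost; below a small threshold, plain serial conversion is faster.
    if threads <= 1 or len(examples) < min_parallel:
        return [convert_one(ex) for ex in examples]
    with Pool(threads) as pool:
        return pool.map(convert_one, examples)
```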
Good Luck HuggingFace - you guys rock! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2935",
"html_url": "https://github.com/huggingface/transformers/pull/2935",
"diff_url": "https://github.com/huggingface/transformers/pull/2935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2935.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2934/comments | https://api.github.com/repos/huggingface/transformers/issues/2934/events | https://github.com/huggingface/transformers/pull/2934 | 568,460,504 | MDExOlB1bGxSZXF1ZXN0Mzc3ODg2NDM2 | 2,934 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=h1) Report\n> Merging [#2934](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e2a6445ebbc36121817c1f605d9a09a335f5fba5?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2934 +/- ##\n=======================================\n Coverage 75.35% 75.35% \n=======================================\n Files 94 94 \n Lines 15445 15445 \n=======================================\n Hits 11638 11638 \n Misses 3807 3807\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.62% <0%> (ø)` | :arrow_up: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.77% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <0%> (ø)` | :arrow_up: |\n| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `92.85% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.16% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `86.37% <0%> (ø)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `83.82% <0%> (ø)` | :arrow_up: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/2934/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=footer). 
Last update [e2a6445...fe93e6a](https://codecov.io/gh/huggingface/transformers/pull/2934?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"(cc @mfuntowicz for the temporary workaround)"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | - I added an example using the model with pipelines to show that we have set```{"use_fast": False}``` in the tokenizer for Q&A as noticed in [issue](https://github.com/huggingface/transformers/issues/2920)
- I added a Colab to play with the model and pipelines
- I added a Colab to discover Huggingface pipelines at the end of the document
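A hedged version of that pipeline snippet (the model id is an assumption - substitute the id this card actually describes):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
    tokenizer=(
        "mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
        {"use_fast": False},  # temporary workaround from issue #2920
    ),
)
```
| {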
"url": "https://api.github.com/repos/huggingface/transformers/issues/2934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2934",
"html_url": "https://github.com/huggingface/transformers/pull/2934",
"diff_url": "https://github.com/huggingface/transformers/pull/2934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2934.patch",
"merged_at": 1582387602000
} |
https://api.github.com/repos/huggingface/transformers/issues/2933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2933/comments | https://api.github.com/repos/huggingface/transformers/issues/2933/events | https://github.com/huggingface/transformers/pull/2933 | 568,455,104 | MDExOlB1bGxSZXF1ZXN0Mzc3ODgyMDA2 | 2,933 | Fix for fast tokenizers save_pretrained compatibility with Python. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=h1) Report\n> Merging [#2933](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a50d501ec54fd28eed57031ddbba6480768f9bc?src=pr&el=desc) will **decrease** coverage by `1%`.\n> The diff coverage is `95%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2933 +/- ##\n=========================================\n- Coverage 77.21% 76.2% -1.01% \n=========================================\n Files 98 98 \n Lines 16030 16040 +10 \n=========================================\n- Hits 12377 12224 -153 \n- Misses 3653 3816 +163\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `41.1% <100%> (+1.3%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91% <66.66%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.03% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.54% <0%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=footer). Last update [6a50d50...f22083c](https://codecov.io/gh/huggingface/transformers/pull/2933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,651 | 1,582 | MEMBER | null | The name of generated file doesn't match between tokenizers and transformers tokenizers, so transformers is not able to load model saved with tokenizers models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2933",
"html_url": "https://github.com/huggingface/transformers/pull/2933",
"diff_url": "https://github.com/huggingface/transformers/pull/2933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2933.patch",
"merged_at": 1582586443000
} |
https://api.github.com/repos/huggingface/transformers/issues/2932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2932/comments | https://api.github.com/repos/huggingface/transformers/issues/2932/events | https://github.com/huggingface/transformers/pull/2932 | 568,388,811 | MDExOlB1bGxSZXF1ZXN0Mzc3ODI3Nzc5 | 2,932 | [WIP] Add a trainer tool class to make the TF2 model training easier | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,582 | 1,587 | 1,587 | CONTRIBUTOR | null | Hello,
I decided to open a cleaner PR because the other was a bit too messy in my opinion. Here the features that this Trainer class will be able to handle:
- [x] Single / Multiple GPU training. Distributed is across GPUs in the same host. Distribution across multiple machines will be for a future version.
- [ ] The training can be configured with a JSON file.
- [x] Handle multiple data processor to be able to train a model over different datasets.
- [x] Select and configure a specific loss/optimizer for a training
- [x] Create multiple checkpoints during the training in order to make it fault-tolerant.
- [x] Create the logs to be able to visualize the training in Tensorboard
- [x] The final model is saved in Hugging face transformer format and in TF saved model
- [x] Able to give a data directory where to find the datasets
- [x] Automatically handle dataset/model caching
- [ ] Run an evaluation over a test dataset with proper printed results such as the one proposed by the `seqeval` package
Currently the trainer class can be used over the GLUE and XNLI datasets with the available examples `examples/run_tf_xnli_with_trainer.py` and `examples/run_tf_glue_with_trainer.py`. I will add new examples for different tasks and datasets.
The list of features above will be checked as things progress.
Ping @julien-c @LysandreJik @thomwolf : Do not hesitate to make proposals if you have new ideas of features or advices on a better implementation of this trainer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2932",
"html_url": "https://github.com/huggingface/transformers/pull/2932",
"diff_url": "https://github.com/huggingface/transformers/pull/2932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2932.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2931/comments | https://api.github.com/repos/huggingface/transformers/issues/2931/events | https://github.com/huggingface/transformers/pull/2931 | 568,317,784 | MDExOlB1bGxSZXF1ZXN0Mzc3NzY5NzAx | 2,931 | Fix spell: EsperBERTo, not EspertBERTo | {
"login": "vochicong",
"id": 123111,
"node_id": "MDQ6VXNlcjEyMzExMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/123111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vochicong",
"html_url": "https://github.com/vochicong",
"followers_url": "https://api.github.com/users/vochicong/followers",
"following_url": "https://api.github.com/users/vochicong/following{/other_user}",
"gists_url": "https://api.github.com/users/vochicong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vochicong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vochicong/subscriptions",
"organizations_url": "https://api.github.com/users/vochicong/orgs",
"repos_url": "https://api.github.com/users/vochicong/repos",
"events_url": "https://api.github.com/users/vochicong/events{/privacy}",
"received_events_url": "https://api.github.com/users/vochicong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=h1) Report\n> Merging [#2931](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2931 +/- ##\n=======================================\n Coverage 75.35% 75.35% \n=======================================\n Files 94 94 \n Lines 15444 15444 \n=======================================\n Hits 11638 11638 \n Misses 3806 3806\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=footer). Last update [d490b5d...b5607ba](https://codecov.io/gh/huggingface/transformers/pull/2931?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"You're right, thanks for fixing!"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | This misspelling almost drove me crazy :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2931",
"html_url": "https://github.com/huggingface/transformers/pull/2931",
"diff_url": "https://github.com/huggingface/transformers/pull/2931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2931.patch",
"merged_at": 1582210928000
} |
https://api.github.com/repos/huggingface/transformers/issues/2930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2930/comments | https://api.github.com/repos/huggingface/transformers/issues/2930/events | https://github.com/huggingface/transformers/pull/2930 | 568,284,544 | MDExOlB1bGxSZXF1ZXN0Mzc3NzQzMDc0 | 2,930 | Add local_files_only parameter to pretrained items | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=h1) Report\n> Merging [#2930](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `1.05%`.\n> The diff coverage is `92.3%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2930 +/- ##\n==========================================\n- Coverage 75.3% 74.24% -1.06% \n==========================================\n Files 94 94 \n Lines 15424 15430 +6 \n==========================================\n- Hits 11615 11456 -159 \n- Misses 3809 3974 +165\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.49% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.1% <100%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.46% <100%> (+0.13%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <88.88%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.91% <0%> (-9.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `70.86% <0%> (-2.3%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `92.07% <0%> (-2.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=footer). Last update [59c23ad...826eced](https://codecov.io/gh/huggingface/transformers/pull/2930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I think this feature is reasonable but I'm not sure about the param name. Maybe something like `disable_networking` or `local_files_only`?",
"Yeah, wasn't sure about a parameter name. I like `local_files_only`, even though it's quite long.",
"Would it make sense to utilise this in the examples? I am thinking about multi-GPU set-ups where the online lookup only has to be done by the main process (local_rank == 0). For all other processes, local_files_only can be True. Might avoid some redundant look-ups - even though in practice it won't matter much in terms of speed (couple of seconds at most)."
] | 1,582 | 1,584 | 1,582 | COLLABORATOR | null | closes https://github.com/huggingface/transformers/issues/2867
Setting local_files_only=True disables outgoing traffic:
- etags are not looked up
- files are not downloaded (config, tokenizer, model)
An appropriate error is thrown when this argument may be the reason why a model cannot be loaded.
```python
import pyinstrument
from transformers import DistilBertConfig, DistilBertModel, DistilBertTokenizer
class TreeProfiler:
    def __init__(self, show_all=False):
        self.profiler = pyinstrument.Profiler()
        self.show_all = show_all  # verbose output of pyinstrument profiler

    def __enter__(self):
        print("WITH TREE_PROFILER:")
        self.profiler.start()

    def __exit__(self, *args):
        self.profiler.stop()
        print(self.profiler.output_text(unicode=True, color=True, show_all=self.show_all))


def main():
    with TreeProfiler(show_all=True):
        config = DistilBertConfig.from_pretrained('distilbert-base-uncased', local_files_only=True)
        model = DistilBertModel.from_pretrained('distilbert-base-uncased', local_files_only=True)
        tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', local_files_only=True)


if __name__ == '__main__':
    main()
```
The above snippet will throw an error message when the expected files are not present in the cache. When they are, though, everything is loaded fine without the need for any additional lookups. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2930/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2930",
"html_url": "https://github.com/huggingface/transformers/pull/2930",
"diff_url": "https://github.com/huggingface/transformers/pull/2930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2930.patch",
"merged_at": 1582574296000
} |
https://api.github.com/repos/huggingface/transformers/issues/2929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2929/comments | https://api.github.com/repos/huggingface/transformers/issues/2929/events | https://github.com/huggingface/transformers/issues/2929 | 568,281,258 | MDU6SXNzdWU1NjgyODEyNTg= | 2,929 | Getting the same results when evaluating Model2Model with different encoder inputs. | {
"login": "dimi1357",
"id": 22443447,
"node_id": "MDQ6VXNlcjIyNDQzNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/22443447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dimi1357",
"html_url": "https://github.com/dimi1357",
"followers_url": "https://api.github.com/users/dimi1357/followers",
"following_url": "https://api.github.com/users/dimi1357/following{/other_user}",
"gists_url": "https://api.github.com/users/dimi1357/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dimi1357/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dimi1357/subscriptions",
"organizations_url": "https://api.github.com/users/dimi1357/orgs",
"repos_url": "https://api.github.com/users/dimi1357/repos",
"events_url": "https://api.github.com/users/dimi1357/events{/privacy}",
"received_events_url": "https://api.github.com/users/dimi1357/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843738573,
"node_id": "MDU6TGFiZWwxODQzNzM4NTcz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder",
"name": "Core: Encoder-Decoder",
"color": "ef536d",
"default": false,
"description": ""
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"It's in the forward pass of the EncoderDecoder:\r\n\r\nhttps://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L205-L237",
"> It's in the forward pass of the EncoderDecoder:\r\n> \r\n> https://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L205-L237\r\n\r\nYes but if I do not pass an \"encoder_ hidden_states\" then the encoder input does not affect the decoder output?",
"If you pass encoder_hidden_states, the encoder is skipped (not called at all). If you do not explicitly pass encoder_hidden_states, the inputs will go through the encoder and the hidden states will be used as encoder_hidden_states.\r\n\r\nhttps://github.com/huggingface/transformers/blob/d490b5d5003654f104af3abd0556e598335b5650/src/transformers/modeling_encoder_decoder.py#L228-L232",
"Oh I guess I have an old version, for me the line \r\n\r\n`encoder_hidden_states = encoder_outputs[0]`\r\ndoes not exists, I'll update and try again.\r\n\r\nThanks",
"Let me know if you run into other issues.",
"> Let me know if you run into other issues.\r\n\r\nStill, I don't see any difference when I am changing the encoder input, for example:\r\n`>>>model(torch.tensor([[10,20,300,4,500,600]]), torch.tensor([[400,500]]), decoder_lm_labels=torch.tensor([[400,500]]))[0]`\r\n`tensor(17.1395, grad_fn=<NllLossBackward>)`\r\n\r\n`>>>model(torch.tensor([[100,200,300,400]]), torch.tensor([[400,500]]), decoder_lm_labels=torch.tensor([[400,500]]))[0]`\r\n`tensor(17.1395, grad_fn=<NllLossBackward>)`",
"I'm seeing the exact same issue. \r\nyou can reproduce it here: https://colab.research.google.com/drive/1DH07pETO_F0eoxE7HaErooEWwptWf_SQ\r\n\r\nWhat's interesting is that if I train the model, it will output something different, but it will output that same thing regardless of what the input to the trained model is. \r\n\r\nAlso the same thing happens if I use `PreTrainedEncoderDecoder`",
"This is what I have discovered (I think):\r\nwhen this calculation is evaluated in Model2Model forward:\r\n`decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder)`\r\nwe are getting to:\r\n`outputs = self.bert(\r\n input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_attention_mask,\r\n )`\r\nand then:\r\n`encoder_outputs = self.encoder(\r\n embedding_output,\r\n attention_mask=extended_attention_mask,\r\n head_mask=head_mask,\r\n encoder_hidden_states=encoder_hidden_states,\r\n encoder_attention_mask=encoder_extended_attention_mask,\r\n )`\r\nto:\r\n`layer_outputs = layer_module(\r\n hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask\r\n )`\r\nto:\r\n` if self.is_decoder and encoder_hidden_states is not None:\r\n cross_attention_outputs = self.crossattention(\r\n attention_output, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask\r\n )\r\n attention_output = cross_attention_outputs[0]\r\n outputs = outputs + cross_attention_outputs[1:] # add cross attentions if we output attention weights\r\n`\r\n\r\nand in this condition `if self.is_decoder and encoder_hidden_states is not None`, is decoder is always \r\n`False` so we never go into this clause, and never using the `encoder_hidden_states`, so we get always the same results, that does not depends on the encoder input or output.",
"@dimi1357 It looks like the issue is that the model is getting `is_decoder` set to True **after** it has been initialized to False, but at that point `BertLayer` has `is_decoder` set to False and so it stays like that. \r\nThis seems to be a workaround: \r\n```\r\ndecoder_config = config = AutoConfig.from_pretrained('bert-base-uncased', is_decoder=True)\r\nmodel = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)\r\n```",
"> @dimi1357 It looks like the issue is that the model is getting `is_decoder` set to True **after** it has been initialized to False, but at that point `BertLayer` has `is_decoder` set to False and so it stays like that.\r\n> This seems to be a workaround:\r\n> \r\n> ```\r\n> decoder_config = config = AutoConfig.from_pretrained('bert-base-uncased', is_decoder=True)\r\n> model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased', decoder_config=decoder_config)\r\n> ```\r\n\r\nYes, that seems to fix the issue,\r\nThanks a lot!!"
] | 1,582 | 1,582 | 1,582 | NONE | null | After fine-tuning the Model2Model with 'bert-base-uncased', I am getting the same loss values no matter what the encoder input is. The PreTrainedEncoderDecoder documentation says that "During prediction, we perform one forward pass through the encoder,
and then perform several forward passes with the encoder's hidden
state through the decoder to decode a full sequence."
I couldn't find the place in the source code that makes the connection between the encoder and the decoder. If someone could show me where this happens, it would be a great help,
thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2929/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2928/comments | https://api.github.com/repos/huggingface/transformers/issues/2928/events | https://github.com/huggingface/transformers/pull/2928 | 568,280,437 | MDExOlB1bGxSZXF1ZXN0Mzc3NzM5ODMw | 2,928 | Make RobertaForMaskedLM implementation identical to fairseq | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"TODO: https://github.com/huggingface/transformers/pull/2913#issuecomment-588508153",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=h1) Report\n> Merging [#2928](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59c23ad9c931ac4fe719abeb3c3851df046ef3a6?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2928 +/- ##\n=========================================\n- Coverage 75.3% 75.3% -0.01% \n=========================================\n Files 94 94 \n Lines 15424 15423 -1 \n=========================================\n- Hits 11615 11614 -1 \n Misses 3809 3809\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2928/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.75% <100%> (-0.02%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=footer). Last update [59c23ad...1f290e5](https://codecov.io/gh/huggingface/transformers/pull/2928?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good. I tested it out and the outputs match exactly everywhere I can see. Requested review from @LysandreJik as well.\r\n\r\nRegarding the test mentioned by @sshleifer, you can just test that a slice of the outputs match rather than the entire tensor. See [here](https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/tests/test_modeling_roberta.py#L323) for an example.",
"> Looks good. I tested it out and the outputs match exactly everywhere I can see. Requested review from @LysandreJik as well.\r\n> \r\n> Regarding the test mentioned by @sshleifer, you can just test that a slice of the outputs match rather than the entire tensor. See [here](https://github.com/huggingface/transformers/blob/2184f87003c18ad8a172ecab9a821626522cf8e7/tests/test_modeling_roberta.py#L323) for an example.\r\n\r\nThanks, will add tests later. I am still a bit confused why the weights of the embeddings are tied to the LMHead in the original implementation, though. I don't quite get the intention there.",
"Hm, perhaps this warning message should not be there.\r\n\r\n> Weights of RobertaForMaskedLM not initialized from pretrained model: ['lm_head.weight']\r\n> Weights from pretrained model not used in RobertaForMaskedLM: ['lm_head.decoder.weight']\r\n\r\n- lm_head.weight is initialised because it takes the embedding weights\r\n- the weights from the pretrained model are not used because they are not required\r\n",
"@BramVanroy Where are you getting that warning? I don't see it when I call `RobertaForMaskedLM.from_pretrained`",
"You can only see it if your logging level is set to INFO or lower. So you can put the following before loading the model.\r\n\r\n```python\r\nimport logging\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n level=logging.INFO)\r\n```` ",
"Oh I see. Looks like the problem is just that the weight param introduced has a different name format than before. Rather than using the functional API as you did here, I would just manually override `decoder.weight` when `weight` is passed. I.e.,\r\n\r\n```python\r\nself.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\r\nif weight is not None:\r\n self.decoder.weight = weight\r\n```\r\nAs you mentioned, it's not a huge issue since the weights are correctly loaded from the embeddings anyway, but probably a bit cleaner if the names align.",
"For those interested, I found the answer to the why on Twitter because of a helpful comment. Apparently this is common practice and has been introduced a while back in [Using the output of embeddings to improve language models](https://arxiv.org/abs/1608.05859).",
"> Hi @BramVanroy! I can see there's an issue here but I don't think this is the way to solve it.\r\n> \r\n> We actually _do_ tie the weights together, so there's no need to do any additional tying. We actually tie the weights for every model that has an LM head (Masked or causal).\r\n> \r\n> The issue here is because of the `bias` I introduced a few weeks ago with #2521. The way I did it means that the bias was actually applied twice.\r\n> \r\n> The correct way to fix it would be to change\r\n> \r\n> ```python\r\n> x = self.decoder(x) + self.bias\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```python\r\n> x = self.decoder(x)\r\n> ```\r\n> \r\n> in the forward method. The bias is already part of the decoder, so no need to apply it once more.\r\n> \r\n> Do you want to update your PR, or should I do one to fix it?\r\n\r\nAha, my bad. I thought I finally contributed something useful! :flushed: You can add a PR, I'll close this one. (Perhaps the updated test is still useful so that something like this doesn't happen in the future.)\r\n\r\nCan you link to the lines where the weight tying is happening, though? I must have completely missed it.",
"Your contributions are more than useful, @BramVanroy, and I'm glad you tried to fix an issue when you discovered one, thank you.\r\n\r\nTo answer your question, the `PreTrainedModel` abstract class has an [`init_weights` method](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/modeling_utils.py#L156) which ties the input embeddings to the output embeddings.\r\n\r\nThis method is not directly called by any model class, but it is called by the [`init_weights` method](https://github.com/huggingface/transformers/blob/53ce3854a16ad2a715bc6ac8af3e30c18b5a1d11/src/transformers/modeling_utils.py#L251) of that same abstract class.\r\n\r\nIt is this last method that is called by every model during their instantiation, for example with [`RobertaModel`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py#L152).\r\n\r\nThis is only the PyTorch way though, the TensorFlow way is different. In TensorFlow, we use a single layer that can be called as an `embedding` or a `linear` layer, as you may see in the [`BertEmbeddings` class](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L132-L152). Please note the `mode` flag which makes possible the choice between the layers."
] | 1,582 | 1,582 | 1,582 | COLLABORATOR | null | closes https://github.com/huggingface/transformers/issues/1874
The implementation of RoBERTa in `transformers` differs from the original implementation in [fairseq](https://github.com/pytorch/fairseq/tree/master/fairseq/models/roberta), as results showed (cf. https://github.com/huggingface/transformers/issues/1874). I have documented my findings here: https://github.com/huggingface/transformers/issues/1874#issuecomment-588359143 and made the corresponding changes in this PR.
Someone should check, however, that removing `get_output_embeddings()` does not have any adverse side-effects.
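For reviewers, a quick way to probe whether the input embeddings and the LM head currently share weights (a sketch, assuming the released `RobertaLMHead` layout with a `decoder` linear layer; whether this prints `True` is exactly what is under discussion):
```python
from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("roberta-base")
# True only if the LM head decoder reuses the word-embedding matrix (tied weights)
print(model.roberta.embeddings.word_embeddings.weight is model.lm_head.decoder.weight)
```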
In addition, someone who is knowledgeable about TensorFlow should check the TF implementation of RoBERTa, too. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2928",
"html_url": "https://github.com/huggingface/transformers/pull/2928",
"diff_url": "https://github.com/huggingface/transformers/pull/2928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2928.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2927/comments | https://api.github.com/repos/huggingface/transformers/issues/2927/events | https://github.com/huggingface/transformers/issues/2927 | 568,225,778 | MDU6SXNzdWU1NjgyMjU3Nzg= | 2,927 | What does ## mean in the bert vocab? | {
"login": "NapsterLong",
"id": 17425788,
"node_id": "MDQ6VXNlcjE3NDI1Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/17425788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NapsterLong",
"html_url": "https://github.com/NapsterLong",
"followers_url": "https://api.github.com/users/NapsterLong/followers",
"following_url": "https://api.github.com/users/NapsterLong/following{/other_user}",
"gists_url": "https://api.github.com/users/NapsterLong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NapsterLong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NapsterLong/subscriptions",
"organizations_url": "https://api.github.com/users/NapsterLong/orgs",
"repos_url": "https://api.github.com/users/NapsterLong/repos",
"events_url": "https://api.github.com/users/NapsterLong/events{/privacy}",
"received_events_url": "https://api.github.com/users/NapsterLong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This question is better suited for [Stack Overflow](https://stackoverflow.com/). Please ask similar questions there in the future.\r\n\r\nIt indicates that the token is a subword unit, i.e. part of a larger word. For instance, the word \"potatoes\" might be tokenised as \"po, ##ta, ##toes\". If you want to learn more about this kind of tokenisation, I suggest you read up on byte-pair encoding and the like."
] | 1,582 | 1,582 | 1,582 | NONE | null | What does ## mean in the bert vocab?
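For example, such tokens show up when tokenizing ordinary text (a minimal sketch, assuming bert-base-uncased):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# words absent from the vocab come back split; pieces after the first carry a ## prefix
print(tokenizer.tokenize("potatoes"))
```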
Some words start with ##, such as ##a, ##m, ##er, ##h. I don't quite understand. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2926/comments | https://api.github.com/repos/huggingface/transformers/issues/2926/events | https://github.com/huggingface/transformers/issues/2926 | 568,169,280 | MDU6SXNzdWU1NjgxNjkyODA= | 2,926 | Masked LM implementation details | {
"login": "luozhouyang",
"id": 34032031,
"node_id": "MDQ6VXNlcjM0MDMyMDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/34032031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luozhouyang",
"html_url": "https://github.com/luozhouyang",
"followers_url": "https://api.github.com/users/luozhouyang/followers",
"following_url": "https://api.github.com/users/luozhouyang/following{/other_user}",
"gists_url": "https://api.github.com/users/luozhouyang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luozhouyang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luozhouyang/subscriptions",
"organizations_url": "https://api.github.com/users/luozhouyang/orgs",
"repos_url": "https://api.github.com/users/luozhouyang/repos",
"events_url": "https://api.github.com/users/luozhouyang/events{/privacy}",
"received_events_url": "https://api.github.com/users/luozhouyang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"The method from google-research/bert you're showing returns the loss. We don't return the loss with `TFBertForMaskedLM`, we return the output distribution over the vocabulary. You can then use this output distribution to compute the loss, with a cross entropy loss for example.",
"I see, thanks."
] | 1,582 | 1,583 | 1,583 | NONE | null | I read the source code of `TFBertMLMHead`, and it seems that this layer just predicts over the whole sequence rather than predicting only the `MASKED` tokens.
`TFBertMLMHead` just does these things:
* transforms the `hidden state` from the last encoder layer
* computes `predictions = tf.matmul(hidden_state, input_embedding_matrix)`, with shape (batch_size, sequence_length, vocab_size)
* returns the `predictions` to calculate the loss
The inputs are simply the `hidden state` of the last encoder layer.
But the implementation from [google-research/bert](https://github.com/google-research/bert/blob/cc7051dc592802f501e8a6f71f8fb3cf9de95dc9/run_pretraining.py#L240) needs extra inputs `masked_lm_positions` and `masked_lm_weights`, and then uses these inputs to calculate the masked LM loss.
So, is `TFBertMLMHead` missing something?
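For completeness, the loss can be computed outside the model from the returned logits. A minimal TF2 sketch that restricts a cross-entropy loss to the masked position, mirroring `masked_lm_positions` (the example sentence, the `mask_pos` index, and the variable names are illustrative assumptions, not library internals):
```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")

# the original sentence supplies the labels; one token is then hidden
labels = tokenizer.encode("The capital of France is Paris.", return_tensors="tf")
mask_pos = 6  # position of "paris" in this tokenization (verify for other inputs)
input_ids = tf.tensor_scatter_nd_update(labels, [[0, mask_pos]], [tokenizer.mask_token_id])

logits = model(input_ids)[0]  # (batch_size, sequence_length, vocab_size), whole sequence

# cross-entropy only at the masked position(s)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction="none")
per_token_loss = loss_fn(labels, logits)  # (batch_size, sequence_length)
is_masked = tf.cast(tf.equal(input_ids, tokenizer.mask_token_id), per_token_loss.dtype)
masked_lm_loss = tf.reduce_sum(per_token_loss * is_masked) / tf.reduce_sum(is_masked)
```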
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2925/comments | https://api.github.com/repos/huggingface/transformers/issues/2925/events | https://github.com/huggingface/transformers/issues/2925 | 568,156,444 | MDU6SXNzdWU1NjgxNTY0NDQ= | 2,925 | DistilRoberta Model fine tuning on Squad dataset | {
"login": "graviraja",
"id": 7556119,
"node_id": "MDQ6VXNlcjc1NTYxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7556119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graviraja",
"html_url": "https://github.com/graviraja",
"followers_url": "https://api.github.com/users/graviraja/followers",
"following_url": "https://api.github.com/users/graviraja/following{/other_user}",
"gists_url": "https://api.github.com/users/graviraja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graviraja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graviraja/subscriptions",
"organizations_url": "https://api.github.com/users/graviraja/orgs",
"repos_url": "https://api.github.com/users/graviraja/repos",
"events_url": "https://api.github.com/users/graviraja/events{/privacy}",
"received_events_url": "https://api.github.com/users/graviraja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053813,
"node_id": "MDU6TGFiZWwxODM0MDUzODEz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/PyTorch",
"name": "PyTorch",
"color": "a12bef",
"default": false,
"description": "Anything PyTorch"
},
{
"id": 1838876023,
"node_id": "MDU6TGFiZWwxODM4ODc2MDIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distillation",
"name": "Distillation",
"color": "d4c5f9",
"default": false,
"description": "Related to model distillation"
}
] | closed | false | null | [] | [
"Hello @graviraja \r\nYou don't need to remove the `token_type_ids` for RoBERTa models. There is one matrix of token type and it has only one type (a matrix of 0).\r\nYou can remove the `do_lower_case` flag for RoBERTa models. The vocabulary is case sensitive.\r\nHave you tried WITHOUT GPU? (`CUDA_VISIBLE_DEVICES=\"\")",
"Hi @VictorSanh \r\nRunning the code on cpu throws the following error\r\n```python\r\nTraceback (most recent call last):\r\n File \"run_squad_w_distillation.py\", line 871, in <module>\r\n main()\r\n File \"run_squad_w_distillation.py\", line 813, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)\r\n File \"run_squad_w_distillation.py\", line 207, in train\r\n \"input_ids\": batch[0],\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 146, in forward\r\n \"them on device: {}\".format(self.src_device_obj, t.device))\r\nRuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu\r\n```\r\nI have mentioned `--no_cuda flag` in the command and removed `--do_lower_case`.\r\n\r\nThank you for your help!",
"I was not able to reproduce your bug @graviraja.\r\n\r\nI pushed an update to `run_squad_w_distillation.py` on master to include RoBERTa, let me know if that works.\r\n\r\nAs I suggested, to test without GPU, you should use the `CUDA_VISIBLE_DEVICES=\"\"` as I believe there is an inconsistency between the `--no_cuda` flag and the `args.n_gpu = torch.cuda.device_count()`. I'll correct it.",
"Hi @VictorSanh still it is not working for me after pulling the latest code. I have tried with `CUDA_VISIBLE_DEVICES=\"\"` without `--no_cuda` flag and with `--no_cuda` flag also. I am getting the below error.\r\n\r\n```python\r\n03/03/2020 13:26:59 - INFO - __main__ - Num examples = 135302\r\n03/03/2020 13:26:59 - INFO - __main__ - Num Epochs = 3\r\n03/03/2020 13:26:59 - INFO - __main__ - Instantaneous batch size per GPU = 8\r\n03/03/2020 13:26:59 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8\r\n03/03/2020 13:26:59 - INFO - __main__ - Gradient Accumulation steps = 1\r\n03/03/2020 13:26:59 - INFO - __main__ - Total optimization steps = 50739\r\nEpoch: 0%| | 0/3 [00:00<?, ?it/s] Traceback (most recent call last):\r\n File \"run_squad_w_distillation.py\", line 868, in <module>\r\n main()\r\n File \"run_squad_w_distillation.py\", line 810, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)\r\n File \"run_squad_w_distillation.py\", line 217, in train\r\n outputs = model(**inputs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 708, in forward\r\n head_mask=head_mask,\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 801, in forward\r\n input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 64, in forward\r\n input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 190, in forward\r\n token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/functional.py\", line 1484, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range: Tried to access index 1 out of table with 0 rows. 
at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418\r\n```\r\nWith gpu, by setting `CUDA_VISIBLE_DEVICES=1`, I am getting the following error:\r\n\r\n```python\r\n File \"run_squad_w_distillation.py\", line 868, in <module>\r\n main()\r\n File \"run_squad_w_distillation.py\", line 810, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)\r\n File \"run_squad_w_distillation.py\", line 217, in train\r\n outputs = model(**inputs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 708, in forward\r\n head_mask=head_mask,\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 808, in forward\r\n encoder_attention_mask=encoder_extended_attention_mask,\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 422, in forward\r\n hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 383, in forward\r\n self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 329, in forward\r\n hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 231, in forward\r\n mixed_query_layer = self.query(hidden_states)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n File \"/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/nn/functional.py\", line 1372, in linear\r\n output = input.matmul(weight.t())\r\nRuntimeError: cublas runtime error : library not initialized at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCGeneral.cpp:216\r\n```\r\n\r\nI have trained the model `roberta` on `squad 2.0` dataset using GPU. Does this cause an issue?\r\n",
"Reading the error when it's running on CPU, It looks like that you have a tokenization problem (it tries to access an index that is out of range). Are you on master? Could you make sure you tokenize the dataset and run the inference with the same version? --> Add a `--overwrite_cache` to retokenize.",
"@VictorSanh same issue is happening with `--overwrite_cache`. May I know the command you are using for training the distilroberta model?",
"@VictorSanh any update on this ?",
"Updating the transformers version fixes it."
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
I am trying to train the **distilroberta-base** model on the **SQuAD** dataset using the distillation code. I have trained a RoBERTa model on the SQuAD 2.0 dataset so that I can use it as a teacher model. I am using the **distilroberta-base** model as the student.
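For context, the distillation objective in that script is essentially a temperature-scaled soft-target loss between teacher and student logits. A minimal PyTorch sketch with illustrative names (not the script's exact internals):
```python
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation
    return F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
```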
## Information
Model I am using: RoBERTa
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: (give details below)
I am using the run_squad_w_distillation.py code, with the following modifications:
- Added the imports relevant to roberta
- Removed token_type_ids from inputs while sending to student model
```python
MODEL_CLASSES = {
"bert": (BertConfig, BertForQuestionAnswering, BertTokenizer),
"xlnet": (XLNetConfig, XLNetForQuestionAnswering, XLNetTokenizer),
"xlm": (XLMConfig, XLMForQuestionAnswering, XLMTokenizer),
"distilbert": (DistilBertConfig, DistilBertForQuestionAnswering, DistilBertTokenizer),
"roberta": (RobertaConfig, RobertaForQuestionAnswering, RobertaTokenizer)
}
```
```python
if args.model_type in ["roberta"]:
del inputs["token_type_ids"]
```
The task I am working on is:
* [x] an official GLUE/SQuAD task: SQuAD 2.0
## To reproduce
Steps to reproduce the behavior:
1. python run_squad_w_distillation.py --model_type roberta --model_name_or_path distilroberta-base --output_dir ./distil_roberta --teacher_type roberta --teacher_name_or_path $ROBERTA_MODEL --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --version_2_with_negative --do_train --do_eval --do_lower_case --save_steps 5000 --logging_steps 5000
```python
/opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [206,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "run_squad_w_distillation.py", line 871, in <module>
main()
File "run_squad_w_distillation.py", line 813, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, teacher=teacher)
File "run_squad_w_distillation.py", line 207, in train
"input_ids": batch[0],
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/media/data2/anaconda/envs/distill2/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: transform: failed to synchronize: cudaErrorAssert: device-side assert triggered
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Training should complete properly, and evaluation on the predict file should give an F1 score close to the RoBERTa teacher's F1 score (81.5).
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: "CentOS Linux 7"
- Python version: 3.6.9
- PyTorch version (GPU?): Yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes, CUDA Version: 10.2, Driver Version: 440.33.01
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2925/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2924/comments | https://api.github.com/repos/huggingface/transformers/issues/2924/events | https://github.com/huggingface/transformers/pull/2924 | 568,151,234 | MDExOlB1bGxSZXF1ZXN0Mzc3NjMzMDkx | 2,924 | Update modeling_tf_utils.py | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This is great, thanks @BramVanroy !!"
] | 1,582 | 1,582 | 1,582 | COLLABORATOR | null | Tensorflow does not use .eval() vs .train().
closes https://github.com/huggingface/transformers/issues/2906 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2924",
"html_url": "https://github.com/huggingface/transformers/pull/2924",
"diff_url": "https://github.com/huggingface/transformers/pull/2924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2924.patch",
"merged_at": 1582302513000
} |
https://api.github.com/repos/huggingface/transformers/issues/2923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2923/comments | https://api.github.com/repos/huggingface/transformers/issues/2923/events | https://github.com/huggingface/transformers/issues/2923 | 568,139,724 | MDU6SXNzdWU1NjgxMzk3MjQ= | 2,923 | Loading tensorflow first and then loading transformers errors | {
"login": "emillykkejensen",
"id": 8842355,
"node_id": "MDQ6VXNlcjg4NDIzNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8842355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emillykkejensen",
"html_url": "https://github.com/emillykkejensen",
"followers_url": "https://api.github.com/users/emillykkejensen/followers",
"following_url": "https://api.github.com/users/emillykkejensen/following{/other_user}",
"gists_url": "https://api.github.com/users/emillykkejensen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emillykkejensen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emillykkejensen/subscriptions",
"organizations_url": "https://api.github.com/users/emillykkejensen/orgs",
"repos_url": "https://api.github.com/users/emillykkejensen/repos",
"events_url": "https://api.github.com/users/emillykkejensen/events{/privacy}",
"received_events_url": "https://api.github.com/users/emillykkejensen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"The import error seems to originate here:\r\n\r\n> tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED\r\n\r\nCan you try the answers provided here:\r\n\r\nhttps://stackoverflow.com/questions/38303974/tensorflow-running-error-with-cublas\r\n\r\nBy the way, I would assume that when you actually try to run a model, the changed order would also trigger errors.",
"Have just tried it (had to modify the code to TF2) - ran this:\r\n```\r\nimport tensorflow as tf\r\nconfig = tf.compat.v1.ConfigProto()\r\nconfig.gpu_options.allow_growth = True\r\n\r\nfrom transformers import TFBertForSequenceClassification\r\nmodel = TFBertForSequenceClassification.from_pretrained(path_to_my_model, from_pt = True)\r\n```\r\nBut still got the same error code....\r\n```\r\n>>> import tensorflow as tf\r\n2020-02-20 10:36:32.394175: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6\r\n2020-02-20 10:36:32.395376: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6\r\nconfig = tf.compat.v1.ConfigProto()\r\n>>> config = tf.compat.v1.ConfigProto()\r\n>>> config.gpu_options.allow_growth = True\r\n>>> from transformers import TFBertForSequenceClassification\r\n>>> model = TFBertForSequenceClassification.from_pretrained(path_to_my_model, from_pt = True)\r\n2020-02-20 10:36:35.742499: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\r\n2020-02-20 10:36:35.746188: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.746521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: \r\npciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5\r\ncoreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s\r\n2020-02-20 10:36:35.746558: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\n2020-02-20 10:36:35.746583: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\r\n2020-02-20 10:36:35.747935: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\r\n2020-02-20 10:36:35.748182: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\r\n2020-02-20 10:36:35.749540: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\r\n2020-02-20 10:36:35.750324: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\r\n2020-02-20 10:36:35.750367: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\r\n2020-02-20 10:36:35.750480: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.750878: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.751142: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0\r\n2020-02-20 10:36:35.751382: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2020-02-20 10:36:35.759634: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 
3192500000 Hz\r\n2020-02-20 10:36:35.759911: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xf1b9b70 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\r\n2020-02-20 10:36:35.759927: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\r\n2020-02-20 10:36:35.947642: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.948100: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xf22f990 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\n2020-02-20 10:36:35.948123: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5\r\n2020-02-20 10:36:35.948331: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.948676: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: \r\npciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5\r\ncoreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s\r\n2020-02-20 10:36:35.948717: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\n2020-02-20 10:36:35.948727: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\r\n2020-02-20 10:36:35.948765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10\r\n2020-02-20 10:36:35.948779: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10\r\n2020-02-20 10:36:35.948792: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10\r\n2020-02-20 10:36:35.948805: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10\r\n2020-02-20 10:36:35.948814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7\r\n2020-02-20 10:36:35.948896: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.949244: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:35.949538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0\r\n2020-02-20 10:36:35.949581: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\n2020-02-20 10:36:36.435874: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:\r\n2020-02-20 10:36:36.435915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0 \r\n2020-02-20 10:36:36.435924: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N \r\n2020-02-20 10:36:36.436177: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:36.436848: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-02-20 10:36:36.437179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21423 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)\r\n2020-02-20 10:36:37.545950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10\r\n2020-02-20 10:36:37.546193: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED\r\n2020-02-20 10:36:37.546226: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED\r\n2020-02-20 10:36:37.546232: W tensorflow/stream_executor/stream.cc:2041] attempting to perform BLAS operation using StreamExecutor without BLAS support\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_utils.py\", line 345, in from_pretrained\r\n return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 93, in load_pytorch_checkpoint_in_tf2_model\r\n tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py\", line 125, in load_pytorch_weights_in_tf2_model\r\n tf_model(tf_inputs, training=False) # Make sure model is built\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 916, in call\r\n outputs = self.bert(inputs, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 567, in call\r\n encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 376, in call\r\n layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 352, in call\r\n attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in 
__call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 301, in call\r\n self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py\", line 230, in call\r\n mixed_query_layer = self.query(hidden_states)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py\", line 822, in __call__\r\n outputs = self.call(cast_inputs, *args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/keras/layers/core.py\", line 1131, in call\r\n outputs = standard_ops.tensordot(inputs, self.kernel, [[rank - 1], [0]])\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py\", line 4106, in tensordot\r\n ab_matmul = matmul(a_reshape, b_reshape)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py\", line 180, in wrapper\r\n return target(*args, **kwargs)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py\", line 2798, in matmul\r\n a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py\", line 5616, in mat_mul\r\n _ops.raise_from_not_ok_status(e, name)\r\n File \"/my_lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py\", line 6606, in raise_from_not_ok_status\r\n six.raise_from(core._status_to_exception(e.code, message), None)\r\n File \"<string>\", line 3, in raise_from\r\n```\r\n\r\nAnd yes - as the model fails to load, I can't run it (the object simply dosn't excist...)",
"I don't use Tensorflow daily (I use PyTorch), but my far-fetched guess would be that because of the loading order, in one case two TF sessions are created which both do `Created TensorFlow device` (you can see that in the trace). That might, then, cause that device to not be able to distinguish the sessions or run out of memory to allocate or something like this.\r\n\r\nSomeone else might chip in here.",
"Seems like a valid guess :) And thanks for giving it a try - at least it works as long as I load transformers and then tf...",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using: Bert
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run:
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt=True)
```
Will produce the following output (with error):
```
>>> import tensorflow as tf
2020-02-20 09:36:51.035083: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-02-20 09:36:51.036337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> from transformers import TFBertForSequenceClassification
>>> model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt = True)
2020-02-20 09:36:52.226797: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-20 09:36:52.230595: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.231392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:36:52.231447: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.231475: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:52.233199: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:36:52.233465: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:36:52.234866: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:36:52.235660: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:36:52.235707: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:36:52.235845: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.236261: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.236765: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:36:52.237022: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-20 09:36:52.241987: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192500000 Hz
2020-02-20 09:36:52.242277: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xeb8bae0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:36:52.242294: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-20 09:36:52.435669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.436129: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xec01900 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:36:52.436153: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5
2020-02-20 09:36:52.436350: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.436672: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:36:52.436706: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.436716: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:52.436744: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:36:52.436755: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:36:52.436765: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:36:52.436774: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:36:52.436781: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:36:52.436861: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.437204: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.437493: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:36:52.437528: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:36:52.936429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 09:36:52.936466: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-02-20 09:36:52.936474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-02-20 09:36:52.936737: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.937283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:36:52.937654: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 21423 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)
2020-02-20 09:36:54.066446: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:36:54.066688: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 09:36:54.066725: E tensorflow/stream_executor/cuda/cuda_blas.cc:238] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2020-02-20 09:36:54.066732: W tensorflow/stream_executor/stream.cc:2041] attempting to perform BLAS operation using StreamExecutor without BLAS support
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 345, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 93, in load_pytorch_checkpoint_in_tf2_model
tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 125, in load_pytorch_weights_in_tf2_model
tf_model(tf_inputs, training=False) # Make sure model is built
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 916, in call
outputs = self.bert(inputs, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 567, in call
encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 376, in call
layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 352, in call
attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 301, in call
self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 230, in call
mixed_query_layer = self.query(hidden_states)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/keras/layers/core.py", line 1131, in call
outputs = standard_ops.tensordot(inputs, self.kernel, [[rank - 1], [0]])
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 4106, in tensordot
ab_matmul = matmul(a_reshape, b_reshape)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/math_ops.py", line 2798, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/ops/gen_math_ops.py", line 5616, in mat_mul
_ops.raise_from_not_ok_status(e, name)
File "/my_lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 6606, in raise_from_not_ok_status
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(15, 768), b.shape=(768, 768), m=15, n=768, k=768 [Op:MatMul] name: tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/query/Tensordot/MatMul/
>>>
```
However, if I load transformers first and then load tensorflow, there is no problem...
(Output from console):
```
>>> from transformers import TFBertForSequenceClassification
2020-02-20 09:40:54.413603: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-02-20 09:40:54.414946: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
>>> import tensorflow as tf
>>> model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt = True)
2020-02-20 09:40:55.402943: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-02-20 09:40:55.407404: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.407771: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:40:55.407828: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:40:55.407858: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:40:55.409288: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:40:55.409560: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:40:55.410954: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:40:55.411852: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:40:55.411906: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:40:55.412020: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one
NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.412437: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.412712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:40:55.412957: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-20 09:40:55.417720: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3192500000 Hz
2020-02-20 09:40:55.417908: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5be91f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:40:55.417927: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-02-20 09:40:55.604909: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.605396: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5cc07b0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-02-20 09:40:55.605419: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GRID RTX6000-24Q, Compute Capability 7.5
2020-02-20 09:40:55.605632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.605947: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:02:02.0 name: GRID RTX6000-24Q computeCapability: 7.5
coreClock: 1.77GHz coreCount: 72 deviceMemorySize: 23.88GiB deviceMemoryBandwidth: 625.94GiB/s
2020-02-20 09:40:55.605984: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-20 09:40:55.606000: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-20 09:40:55.606032: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-20 09:40:55.606045: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-20 09:40:55.606058: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-20 09:40:55.606070: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-20 09:40:55.606080: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-02-20 09:40:55.606159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.606493: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:40:55.606763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-02-20 09:41:00.803464: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-02-20 09:41:00.803503: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-02-20 09:41:00.803509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-02-20 09:41:00.803804: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:41:00.804291: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-20 09:41:00.804643: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 20754 MB memory) -> physical GPU (device: 0, name: GRID RTX6000-24Q, pci bus id: 0000:02:02.0, compute capability: 7.5)
>>>
```
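
For reference, a minimal sketch of another possible mitigation — requesting on-demand GPU memory through the TF 2.x config API, since `ConfigProto.gpu_options.allow_growth` has no effect in eager mode unless the config is attached to a session. Whether this actually avoids the crash here is an untested assumption:

```python
import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of pre-allocating it all.
# This must run before TensorFlow touches the GPU for the first time.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained('/path/to/my/tf/model/', from_pt=True)
```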
## Expected behavior
## Environment info
- `transformers` version: 2.5.0
- Platform: Linux
- Python version: 3.7.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2923/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2922/comments | https://api.github.com/repos/huggingface/transformers/issues/2922/events | https://github.com/huggingface/transformers/pull/2922 | 568,138,533 | MDExOlB1bGxSZXF1ZXN0Mzc3NjIyOTY2 | 2,922 | Tokenizer fast warnings | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=h1) Report\n> Merging [#2922](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2922 +/- ##\n==========================================\n- Coverage 75.35% 75.35% -0.01% \n==========================================\n Files 94 94 \n Lines 15444 15445 +1 \n==========================================\n Hits 11638 11638 \n- Misses 3806 3807 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ø> (-0.03%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.45% <100%> (-0.14%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2922/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=footer). Last update [d490b5d...6a55286](https://codecov.io/gh/huggingface/transformers/pull/2922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | - Warning abut padding should not trigger so often now, especially when no padding strategy is provided by the user.
- The RoBERTa warning is now in RobertaTokenizer, not the GPT2 base class. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2922",
"html_url": "https://github.com/huggingface/transformers/pull/2922",
"diff_url": "https://github.com/huggingface/transformers/pull/2922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2922.patch",
"merged_at": 1582217704000
} |
https://api.github.com/repos/huggingface/transformers/issues/2921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2921/comments | https://api.github.com/repos/huggingface/transformers/issues/2921/events | https://github.com/huggingface/transformers/pull/2921 | 568,123,135 | MDExOlB1bGxSZXF1ZXN0Mzc3NjEwMzMw | 2,921 | Expose all constructor parameters for BertTokenizerFast | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=h1) Report\n> Merging [#2921](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d490b5d5003654f104af3abd0556e598335b5650?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2921 +/- ##\n=======================================\n Coverage 75.35% 75.35% \n=======================================\n Files 94 94 \n Lines 15444 15444 \n=======================================\n Hits 11638 11638 \n Misses 3806 3806\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2921/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.99% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=footer). Last update [d490b5d...21ac4a0](https://codecov.io/gh/huggingface/transformers/pull/2921?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | Signed-off-by: Morgan Funtowicz <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2921/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2921",
"html_url": "https://github.com/huggingface/transformers/pull/2921",
"diff_url": "https://github.com/huggingface/transformers/pull/2921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2921.patch",
"merged_at": 1582217612000
} |
https://api.github.com/repos/huggingface/transformers/issues/2920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2920/comments | https://api.github.com/repos/huggingface/transformers/issues/2920/events | https://github.com/huggingface/transformers/issues/2920 | 568,040,921 | MDU6SXNzdWU1NjgwNDA5MjE= | 2,920 | Error arises when using pipeline with community model | {
"login": "ankandrew",
"id": 61120139,
"node_id": "MDQ6VXNlcjYxMTIwMTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/61120139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankandrew",
"html_url": "https://github.com/ankandrew",
"followers_url": "https://api.github.com/users/ankandrew/followers",
"following_url": "https://api.github.com/users/ankandrew/following{/other_user}",
"gists_url": "https://api.github.com/users/ankandrew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankandrew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankandrew/subscriptions",
"organizations_url": "https://api.github.com/users/ankandrew/orgs",
"repos_url": "https://api.github.com/users/ankandrew/repos",
"events_url": "https://api.github.com/users/ankandrew/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankandrew/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @ankandrew, \r\n\r\nThanks for reporting the issue. Effectively, the QA pipeline is not compatible with fast tokenizers for technical reasons (and I'm currently working on a fix for this).\r\n\r\nAs a workaround for now, you can disable fast tokenizers when allocating the pipeline:\r\n\r\n```python\r\nnlp = pipeline(\r\n 'question-answering', \r\n model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',\r\n tokenizer=(\r\n 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es', \r\n {\"use_fast\": False}\r\n )\r\n)\r\n\r\nnlp(\r\n {\r\n 'question': 'que queso es?',\r\n 'context': 'Se utilizo en el dia de hoy un queso Emmental'\r\n }\r\n)\r\n> {'score': 0.36319364208159755, 'start': 31, 'end': 44, 'answer': 'queso Emmental'}\r\n```",
"Also cc'ing @mrm8488 for information while it's in the process of being fixed",
"Thank for the information!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using is: `mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es`
Language I am using the model on: Spanish
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset:
Steps to reproduce the behavior:
```python
from transformers import pipeline
# Build a pipeline for QA
nlp = pipeline('question-answering', model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',
tokenizer='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es')
nlp(
{
'question': 'que queso es?',
'context': 'Se utilizo en el dia de hoy un queso Emmental'
}
)
```
This was working two days ago.
<details>
<summary>Error log</summary>
```
convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s]WARNING:transformers.tokenization_utils:Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 1).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py", line 141, in squad_convert_example_to_features
truncation_strategy="only_second" if tokenizer.padding_side == "right" else "only_first",
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 1796, in encode_plus
**kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 1722, in batch_encode_plus
tokens = self._tokenizer.encode(*batch_text_or_text_pairs[0])
File "/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py", line 141, in encode
return self._tokenizer.encode(sequence, pair)
TypeError
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-21-df466dea770c> in <module>()
8 nlp({
9 'question': question,
---> 10 'context': context
11 })
12 )
11 frames
/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py in encode()
139 An Encoding
140 """
--> 141 return self._tokenizer.encode(sequence, pair)
142
143 def encode_batch(self, sequences: List[Union[str, Tuple[str, str]]]) -> List[Encoding]:
TypeError:
```
</details>
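
As a stopgap, the maintainer's comment above suggests disabling the fast tokenizer. A minimal sketch of that same workaround, loading the slow tokenizer explicitly (same model name as in the report):

```python
from transformers import AutoTokenizer, pipeline

model_name = 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es'

# use_fast=False loads the slow (pure-Python) tokenizer, which the QA pipeline
# still supports; the fast tokenizer is what triggers the TypeError above.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
nlp = pipeline('question-answering', model=model_name, tokenizer=tokenizer)

nlp({
    'question': 'que queso es?',
    'context': 'Se utilizo en el dia de hoy un queso Emmental'
})
```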
## Environment info
- `transformers` version: 2.5.0
- Python version: 3.6.9
- Torch version (GPU?): 1.4.0, running on CPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2919/comments | https://api.github.com/repos/huggingface/transformers/issues/2919/events | https://github.com/huggingface/transformers/issues/2919 | 567,947,425 | MDU6SXNzdWU1Njc5NDc0MjU= | 2,919 | Fast tokenizers ignore `add_special_tokens=False` | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This works now (but I can't close the issue).",
"I see now that it works with some changes:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\n\r\npretrained_model_name = \"bert-base-cased\"\r\n\r\nfast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name,\r\n add_special_tokens=False, use_fast=True)\r\nslow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)\r\n\r\ntext = \"hello\"\r\n\r\nassert fast_tokenizer.encode(text) == slow_tokenizer.encode(text, add_special_tokens=False)\r\n```\r\n\r\nHowever, I see `add_special_tokens` needs to be specified differently in the fast version (init) and in the slow version (encode). Can it be made more homogeneous? I'll leave this issue open for this because the fast version still ignores it in `encode` and there's this discrepancy (maybe the slow version can be changed then).",
"It also doesn't work for `roberta-base`.",
"In the last version, available on `master` for now, we actually changed this to match the slow version. So in all cases, `add_special_tokens` should be specified with `tokenize`, `encode` etc, and not during initialization."
] | 1,582 | 1,587 | 1,587 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer
pretrained_model_name = "bert-base-cased"
fast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, use_fast=False)
text = "hello"
assert fast_tokenizer.encode(text, add_special_tokens=False) == slow_tokenizer.encode(text, add_special_tokens=False)
```
## Expected behavior
The fast tokenizers shouldn't add the special tokens if `add_special_tokens` is equal to `False`.
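
Until that happens, the discussion above suggests a workaround for 2.5.x — passing the flag at construction time for the fast tokenizer (reported to work for `bert-base-cased`, though apparently not for `roberta-base`). A sketch:

```python
from transformers import AutoTokenizer

pretrained_model_name = "bert-base-cased"

# In 2.5.x the fast tokenizer honors add_special_tokens at __init__ time,
# while the slow tokenizer honors it at encode() time.
fast_tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name, add_special_tokens=False, use_fast=True
)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, use_fast=False)

text = "hello"
assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text, add_special_tokens=False)
```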
## Environment info
- `transformers` version: 2.5.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2919/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/2919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2918/comments | https://api.github.com/repos/huggingface/transformers/issues/2918/events | https://github.com/huggingface/transformers/pull/2918 | 567,931,063 | MDExOlB1bGxSZXF1ZXN0Mzc3NDU0NTM4 | 2,918 | Fast Tokenizers save pretrained should return the list of generated file paths. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=h1) Report\n> Merging [#2918](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2708b44ee9c151a2cdb84620d295c997af6fa7f0?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2918 +/- ##\n==========================================\n+ Coverage 75.33% 75.35% +0.01% \n==========================================\n Files 94 94 \n Lines 15444 15444 \n==========================================\n+ Hits 11635 11638 +3 \n+ Misses 3809 3806 -3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.58% <100%> (+0.44%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=footer). Last update [2708b44...c10fcae](https://codecov.io/gh/huggingface/transformers/pull/2918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2918",
"html_url": "https://github.com/huggingface/transformers/pull/2918",
"diff_url": "https://github.com/huggingface/transformers/pull/2918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2918.patch",
"merged_at": 1582156685000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2917/comments | https://api.github.com/repos/huggingface/transformers/issues/2917/events | https://github.com/huggingface/transformers/issues/2917 | 567,929,157 | MDU6SXNzdWU1Njc5MjkxNTc= | 2,917 | Breaking-change behavior in BERT tokenizer when stripping accents | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yeah, I found the same problem in my code. The \"encode\" won't add padding even \"pad_to_max_length = True\".",
"HI @bryant1410, \r\n\r\nThanks for reporting the issue. The parameter `strip_accents` was indeed enabled on `BertTokenizerFast`. \r\n\r\nI've a PR exposing the missing parameters https://github.com/huggingface/transformers/pull/2921, it will land soon on master and will be included in the first maintenance release of 2.5 ",
"I see, thanks! There's an incompatibility still though, which is that you can choose if to strip accents in the fast tokenizers but you can't control that in the previous tokenizers. I believe this should be fixed as well.\r\n\r\nAnd be aware that, IIRC, this is still a breaking change, because in the previous tokenizers you would get stipped accents by default in one way but now it seems to behave in a different way by default.\r\n\r\nI don't know if this also the case for the other params added in #2921, and for other models apart from BERT.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Please don't close it as this is an important issue.",
"Same one reported by @stefan-it, @n1t0 ?",
"Yes same one. Stripping accents is happening only when `do_lower_case=True` for slow tokenizers, and there is no way at the moment to change this behavior.\r\n\r\nWe can probably add an explicit option for this on slow tokenizers, and specify the default values in the configs.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Don't close it!! I want to have control of striping accents when tokenizing"
] | 1,582 | 1,594 | 1,593 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert (could happen with other ones, don't know)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import AutoTokenizer
pretrained_model_name = "bert-base-cased"
fast_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
slow_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name, use_fast=False)
text = "naïve"
assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text)
```
With the slow one, it only strips accents if lowercasing is enabled (maybe a bug?):
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_bert.py#L346
With the fast one, it never strips accents:
https://github.com/huggingface/tokenizers/blob/python-v0.5.0/bindings/python/tokenizers/implementations/bert_wordpiece.py#L23
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_bert.py#L557-L565
It'd be cool to have that flag in both tokenizers.
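For illustration, a minimal sketch of the API I have in mind (hypothetical: `strip_accents` is not an accepted argument in transformers 2.5.0, so this relies on the flag being exposed first):
```python
from transformers import BertTokenizerFast

# Hypothetical usage: assumes a `strip_accents` kwarg is exposed and forwarded
# to the backend tokenizer, which transformers 2.5.0 does not do yet.
fast_tokenizer = BertTokenizerFast.from_pretrained(
    "bert-base-cased", strip_accents=False
)
print(fast_tokenizer.tokenize("naïve"))  # the accent should survive tokenization
```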
Finally, this warning seems odd for the simple code from above:
```pycon
>>> assert fast_tokenizer.encode(text) == slow_tokenizer.encode(text)
Disabled padding because no padding token set (pad_token: [PAD], pad_token_id: 0).
To remove this error, you can add a new pad token and then resize model embedding:
tokenizer.pad_token = '<PAD>'
model.resize_token_embeddings(len(tokenizer))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
```
Maybe the `if pad_to_max_length` here should nest the rest of the if?
https://github.com/huggingface/transformers/blob/e67676424191e5935362e5fe7e04b5c317d706a9/src/transformers/tokenization_utils.py#L80-L95
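Something like this is what I mean (a sketch with made-up names, not the actual library code):
```python
# Sketch: gate the "no padding token" check on the padding request itself,
# so the warning cannot fire when padding was never asked for.
def should_pad(pad_to_max_length, pad_token_id):
    if pad_to_max_length:
        if pad_token_id is None:
            print("Disabled padding because no padding token is set.")
            return False
        return True
    return False  # padding was not requested, so there is nothing to warn about
```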
I didn't check the other transformer models.
## Expected behavior
1. The 2 tokenizer outputs (slow and fast) should be the same.
2. The tokenizers should allow you to choose whether to strip accents or not.
3. That warning shouldn't appear, IMHO.
## Environment info
- `transformers` version: 2.5.0
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2917/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2916/comments | https://api.github.com/repos/huggingface/transformers/issues/2916/events | https://github.com/huggingface/transformers/issues/2916 | 567,910,296 | MDU6SXNzdWU1Njc5MTAyOTY= | 2,916 | How to train a LM with a custom Dataset? | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | null | [] | [
"This is in process of being addressed at huggingface/blog#3\r\n\r\n(You'll need to tweak the code of `run_language_modeling.py`, this is not – yet – a code-free tutorial) "
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | # ❓ Questions & Help
I'm attempting to build an LM following the tutorial here (https://huggingface.co/blog/how-to-train).
Unfortunately, it is incomplete. It shows how to create a custom `Dataset` but not how to execute `run_language_modeling.py` so that it is used.
**Any chance we can get the full script for training the LM, including how to specify our custom dataset?**
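To make the question concrete, this is roughly the custom `Dataset` from the tutorial that I want to plug in (a sketch with hypothetical names and paths; the missing piece is how to make `run_language_modeling.py` construct it):
```python
import torch
from torch.utils.data import Dataset

class MyLineByLineDataset(Dataset):
    """One pre-tokenized training example per line of a text file (sketch)."""

    def __init__(self, tokenizer, file_path, block_size=128):
        with open(file_path, encoding="utf-8") as f:
            lines = [line.strip() for line in f if line.strip()]
        # Tokenize everything up front, truncating each line to block_size.
        self.examples = [
            tokenizer.encode(line, max_length=block_size) for line in lines
        ]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return torch.tensor(self.examples[i], dtype=torch.long)
```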
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2915/comments | https://api.github.com/repos/huggingface/transformers/issues/2915/events | https://github.com/huggingface/transformers/issues/2915 | 567,887,664 | MDU6SXNzdWU1Njc4ODc2NjQ= | 2,915 | How to train with variable number of candidates for multiple choice selection? | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,582 | 1,588 | 1,588 | NONE | null | # ❓ Questions
## Details
I am trying to train `GPT2DoubleHeadsModel` for the tasks of generation and multiple-choice selection. My dataset has examples with a variable number of candidates - some examples have 10 candidates, some have 15, etc.
I wanted to be able to create a single `TensorDataset` object for my dataset and train the model using a `DataLoader` wrapped around this dataset. But clearly, since the number of candidates varies across examples, I am unable to do so.
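One workaround I can think of (a sketch, not an established best practice) is to keep variable-length candidate lists and pad them per batch in a custom `collate_fn`:
```python
import torch

def collate_candidates(batch, pad_token_id=0):
    """Pad every example's candidates to the batch-wide maximum (sketch).

    Assumes each example is (candidates, label), where `candidates` is a
    list of 1-D LongTensors whose count and length both vary. GPT-2 has no
    real pad token, so pad_token_id=0 here is just a placeholder.
    """
    max_cands = max(len(cands) for cands, _ in batch)
    max_len = max(t.size(0) for cands, _ in batch for t in cands)
    input_ids = torch.full(
        (len(batch), max_cands, max_len), pad_token_id, dtype=torch.long
    )
    for i, (cands, _) in enumerate(batch):
        for j, t in enumerate(cands):
            input_ids[i, j, : t.size(0)] = t
    mc_labels = torch.tensor([label for _, label in batch], dtype=torch.long)
    return input_ids, mc_labels
```
Whether the padded dummy candidates then need to be masked out of the multiple-choice loss is exactly the part I am unsure about.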
What is an appropriate way (or best practice) to train `GPT2DoubleHeadsModel` with such a dataset? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2914/comments | https://api.github.com/repos/huggingface/transformers/issues/2914/events | https://github.com/huggingface/transformers/pull/2914 | 567,869,331 | MDExOlB1bGxSZXF1ZXN0Mzc3NDE0Mzk1 | 2,914 | Add syntax highlighting to the BibTeX in README | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=h1) Report\n> Merging [#2914](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e67676424191e5935362e5fe7e04b5c317d706a9?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2914 +/- ##\n=======================================\n Coverage 75.32% 75.32% \n=======================================\n Files 94 94 \n Lines 15438 15438 \n=======================================\n Hits 11629 11629 \n Misses 3809 3809\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=footer). Last update [e676764...5574050](https://codecov.io/gh/huggingface/transformers/pull/2914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,582 | 1,582 | 1,582 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2914",
"html_url": "https://github.com/huggingface/transformers/pull/2914",
"diff_url": "https://github.com/huggingface/transformers/pull/2914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2914.patch",
"merged_at": 1582211176000
} |