url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/5420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5420/comments | https://api.github.com/repos/huggingface/transformers/issues/5420/events | https://github.com/huggingface/transformers/pull/5420 | 648,570,076 | MDExOlB1bGxSZXF1ZXN0NDQyMzU4OTYw | 5,420 | Refactor generation sampling parameters (e.g. top k, temperature) into "Sampling" classes | {
"login": "turtlesoupy",
"id": 448590,
"node_id": "MDQ6VXNlcjQ0ODU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/448590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turtlesoupy",
"html_url": "https://github.com/turtlesoupy",
"followers_url": "https://api.github.com/users/turtlesoupy/followers",
"following_url": "https://api.github.com/users/turtlesoupy/following{/other_user}",
"gists_url": "https://api.github.com/users/turtlesoupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turtlesoupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turtlesoupy/subscriptions",
"organizations_url": "https://api.github.com/users/turtlesoupy/orgs",
"repos_url": "https://api.github.com/users/turtlesoupy/repos",
"events_url": "https://api.github.com/users/turtlesoupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/turtlesoupy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@sshleifer thanks for taking a look. The run against the tests you mentioned (bart/t5/marian) passed when I gave them a kick. When you say performance, this approach should have the same amount of compute (each enabled Sampler runs once per generation loop) since it is just moving code around unless I missed something. Let me do a rebase and see if that CI failure goes away -- let me know if you have any other concerns! ",
"@turtlesoupy - thanks a lot for the PR! Cool design choice! \r\n\r\nThe `generate` method definitely needs a bigger refactor sooner or later and this is a cool idea on how to make it easier to add new probability distribution wrap functions. With this design I'm a bit worried that we restrict beam search too much in a sense that only the log_softmax of the \"next_tokens\" distribution can \"wrapped\" but not the summed distribution of the `next_token_scorers + beam_scores`. Here this will break the beam search + sampling case (if I understood the code correctly).\r\n\r\nI guess a method that adapts the `_beam_scores + next_token_scores` could also be used in \"greedy\" beam search in the future and this design choice would block us a bit. But I'm not sure whether there are many use cases one would like to adapt `_beam_scores + next_token_scores` before appling `top_k` for \"greedy\" beam search...what are your thoughts on this? @turtlesoupy @yjernite @sshleifer ",
"@patrickvonplaten I'm un-opinionated since my use cases weren't using beam search; the goal of this PR was so that I could introduce a my own sampler that enforced rules without having to fork the generate function.\r\n\r\nFor beam search, one approach could be to apply the warp to (`next_token_scores + beam_scores`) and then perform sampling afterwards. Then it is sampling from a consistent space and the hypothesis scores would be modified appropriately ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | #4164 has a full description of the intention here. Basically, to avoid exploding generate(...) with more arguments, I've added one generic Sampler parameter that allows for arbitrary transformations of the generation probability distribution conditioned on the past. This allows users to specify custom ways of sampling (e.g. insert a specific token after a previous one, etc.)
In the process, I've added some basic tests around these samplers; existing tests pass otherwise. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5420/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5420",
"html_url": "https://github.com/huggingface/transformers/pull/5420",
"diff_url": "https://github.com/huggingface/transformers/pull/5420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5420.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5419/comments | https://api.github.com/repos/huggingface/transformers/issues/5419/events | https://github.com/huggingface/transformers/issues/5419 | 648,568,661 | MDU6SXNzdWU2NDg1Njg2NjE= | 5,419 | High Quality EN-DE/EN-FR Translators | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"Excuse me.\r\nWill this model be added in the future, how long will it take?\r\nIs currently only T5 and Bart can do machine translation?",
"I would guess that I get around to this by the end of July, but I can't be sure.\r\n\r\nWe also have `MarianMTModel` and 1000+ pretrained weights from `Helsinki-NLP/` that do translation. Here is the list:\r\nhttps://huggingface.co/Helsinki-NLP\r\n",
"I will work on this one. ",
"Here is a lazy man's implementation that uses a simple proxy to the fairseq implementation and makes the spec test pass:\r\n```\r\nimport torch\r\n\r\nclass FairseqProxy():\r\n def __init__(self, module):\r\n self.module = module\r\n \r\n @classmethod\r\n def from_pretrained(cls, mname): \r\n return cls(module=torch.hub.load('pytorch/fairseq', mname, checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe'))\r\n\r\nclass FairseqTranslator(FairseqProxy):\r\n \r\n def generate(self, **tokenized_sentences):\r\n return self.module.generate(tokenized_sentences['data'])\r\n \r\nclass FairseqBPETokenizer(FairseqProxy):\r\n\r\n def prepare_seq2seq_batch(self, sentences): # encode\r\n return {'data': [self.module.encode(sentence) for sentence in sentences]}\r\n \r\n def batch_decode(self, batched_hypos):\r\n return [self.module.decode(hypos[0]['tokens']) for hypos in batched_hypos]\r\n```\r\n\r\n```\r\n# Look ma, I cheated and the test passes ;)\r\nmname = 'transformer.wmt19.ru-en'\r\nmodel = FairseqTranslator.from_pretrained(mname)\r\ntokenizer = FairseqBPETokenizer.from_pretrained(mname)\r\nbatch = tokenizer.prepare_seq2seq_batch([\"Машинное обучение - это здорово!\"])\r\ntranslated = model.generate(**batch)\r\nassert tokenizer.batch_decode(translated)[0] == 'Machine learning is great!'\r\n```\r\n\r\nNow to the real work of porting...",
"mostly done: https://github.com/huggingface/transformers/pull/6940",
"once https://github.com/huggingface/transformers/pull/6940 is merged this issue is to be closed",
"FYI, Linked Pull requests automatically close the linked issue.",
"I noticed that you already did the linking after leaving the comment, but decided to leave it as the previous comment of mine wasn't certain ;)"
] | 1,593 | 1,600 | 1,600 | CONTRIBUTOR | null | Download instructions from torchub/fairseq: [here](https://github.com/pytorch/fairseq/blob/f03392d11faf1588cb571d19835d6a61ab0d9ca6/examples/wmt19/README.md#L1)
the BART conversion script should be reusable.
## Open source status
* [ x] the model implementation is available: (give details)
* [ x] the model weights are available: (give details)
* [ x] who are the authors: (mention them, if possible by @gh-username)
Sergey Edunov, @myleott Michael Auli, David Grangier
Paper: https://arxiv.org/pdf/1808.09381.pdf
### Spec
Desired API:
```python
mname = 'facebook/wmt-en-de'
model = FairseqTranslator.from_pretrained(mname)
tokenizer = FairseqBPETokenizer.from_pretrained(mname) # AutoTokenizer should also work
batch = tokenizer.prepare_seq2seq_batch(['Maschinelles Lernen ist großartig!'])
translated = model.generate(**batch) # determine
assert tokenizer.batch_decode(translated)[0] == 'Machine Learning is great'
```
- add .rst docs, (see adding a new model instructions, but don't follow them too religiously if something seems suboptimal).
- check timing, memory vs fairseq.
- if lots of modeling code is added, common tests should pass.
### Steps
1. Get tokenizer equivalence (The fairseq object should have an encode method, and there should be wgettable links of fairseq to get the relevant tokenizer files).
1b. Upload tokenizer to s3 so your tokenizer tests work on CI. You can work out of the `stas/fairseq-en-de` namespace on your modelhub account and then move everything over (or not) at the end.
2. Get model.forward/ "logits" equivalence (ignore differences less than 1e-6). This usually doesn't work the first time and you have to go line by line with two ipdb sessions (one fairseq, one hf) until you can find the line that's different. At this stage you should worry very little about code quality and just try to get integration tests passing.
3. Get model.generate/ "translation" equivalence. There may be small beam search discrepancies. For this you will need to figure out `decoder_start_token_id`, `num_beams`, and other config settings.
4. Upload Everything to S3.
5. Go through [template](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/README.md#typical-workflow-for-including-a-model)
and make sure most of the reasonable things are done.
At this point a full integration test (as above) should pass.
6. Check memory, time and BLEU against fairseq (ideally in collab). Improve/document results in PR description.
7. test the scary parts: special tokens, padding insensitivity.
8. Docs/AutoConfig Etc.
Helpful: https://huggingface.co/transformers/model_sharing.html
Assigned to: @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5419/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5418/comments | https://api.github.com/repos/huggingface/transformers/issues/5418/events | https://github.com/huggingface/transformers/pull/5418 | 648,545,647 | MDExOlB1bGxSZXF1ZXN0NDQyMzM3OTEy | 5,418 | Bans SentencePiece 0.1.92 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=h1) Report\n> Merging [#5418](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5418 +/- ##\n==========================================\n+ Coverage 77.69% 77.87% +0.17% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n+ Hits 18906 18949 +43 \n+ Misses 5428 5385 -43 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5418/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=footer). Last update [87716a6...5aa01fe](https://codecov.io/gh/huggingface/transformers/pull/5418?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | SentencePiece 0.1.92 seems to cause Segmentation Fault, as visible [here](https://github.com/huggingface/transformers/issues/4857). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5418/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5418",
"html_url": "https://github.com/huggingface/transformers/pull/5418",
"diff_url": "https://github.com/huggingface/transformers/pull/5418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5418.patch",
"merged_at": 1593696180000
} |
https://api.github.com/repos/huggingface/transformers/issues/5417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5417/comments | https://api.github.com/repos/huggingface/transformers/issues/5417/events | https://github.com/huggingface/transformers/pull/5417 | 648,544,805 | MDExOlB1bGxSZXF1ZXN0NDQyMzM3MTYx | 5,417 | Clean up diffs in Trainer/TFTrainer | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=h1) Report\n> Merging [#5417](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64e3d966b1131c15b5905b1e1e582d4bebac1ef0&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `65.11%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5417 +/- ##\n=======================================\n Coverage 77.75% 77.75% \n=======================================\n Files 140 140 \n Lines 24373 24392 +19 \n=======================================\n+ Hits 18951 18967 +16 \n- Misses 5422 5425 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <33.33%> (-0.45%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <40.00%> (-0.85%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <44.44%> (-3.71%)` | :arrow_down: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <100.00%> (ø)` | |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <100.00%> (+7.45%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <100.00%> (+0.46%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+0.50%)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5417/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=footer). Last update [64e3d96...c185e2f](https://codecov.io/gh/huggingface/transformers/pull/5417?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Pretty cool 🔥"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This PR does a bit of cleanup in the two Trainer and tries ti make the diff in the two TrainingArguments as minimal as possible.
- `set_seed` is now just one function in trainer_utils: the problem was that even if you only use TF and import it from transformers right now, it does not set seed for tf **and** will fail on PyTorch stuff.
- `eval_steps` is now a common argument for both versions of Trainer
- as discussed, `n_gpu` in `TFTrainingArguments` becomes `n_replicas`. This is a breaking change, I can add the deprecation warnings that goes with it if you think it's necessary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5417/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5417",
"html_url": "https://github.com/huggingface/transformers/pull/5417",
"diff_url": "https://github.com/huggingface/transformers/pull/5417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5417.patch",
"merged_at": 1593615621000
} |
https://api.github.com/repos/huggingface/transformers/issues/5416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5416/comments | https://api.github.com/repos/huggingface/transformers/issues/5416/events | https://github.com/huggingface/transformers/pull/5416 | 648,536,707 | MDExOlB1bGxSZXF1ZXN0NDQyMzMwMjkz | 5,416 | Refactor generation sampling parameters (e.g. top k, temperature) into "Sampling" classes | {
"login": "turtlesoupy",
"id": 448590,
"node_id": "MDQ6VXNlcjQ0ODU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/448590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/turtlesoupy",
"html_url": "https://github.com/turtlesoupy",
"followers_url": "https://api.github.com/users/turtlesoupy/followers",
"following_url": "https://api.github.com/users/turtlesoupy/following{/other_user}",
"gists_url": "https://api.github.com/users/turtlesoupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/turtlesoupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/turtlesoupy/subscriptions",
"organizations_url": "https://api.github.com/users/turtlesoupy/orgs",
"repos_url": "https://api.github.com/users/turtlesoupy/repos",
"events_url": "https://api.github.com/users/turtlesoupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/turtlesoupy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"(Replaced merge with rebase -- see #5420)"
] | 1,593 | 1,593 | 1,593 | NONE | null | #4164 has a full description of the intention here. Basically, to avoid exploding `generate(...)` with more arguments, I've added one generic `Sampler` parameter that allows for arbitrary transformations of the generation probability distribution conditioned on the past. This allows users to specify custom ways of sampling (e.g. insert a specific token after a previous one, etc.)
In the process, I've added some basic tests around these samplers; existing tests pass otherwise. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5416/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5416",
"html_url": "https://github.com/huggingface/transformers/pull/5416",
"diff_url": "https://github.com/huggingface/transformers/pull/5416.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5416.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5415/comments | https://api.github.com/repos/huggingface/transformers/issues/5415/events | https://github.com/huggingface/transformers/pull/5415 | 648,514,803 | MDExOlB1bGxSZXF1ZXN0NDQyMzEyMjMw | 5,415 | Gradient checkpointing BERT & ALBERT poc | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=h1) Report\n> Merging [#5415](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5415 +/- ##\n==========================================\n+ Coverage 77.69% 77.71% +0.01% \n==========================================\n Files 140 140 \n Lines 24334 24343 +9 \n==========================================\n+ Hits 18906 18917 +11 \n+ Misses 5428 5426 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.71% <100.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.86% <100.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.46% <100.00%> (+0.80%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.12% <100.00%> (+0.21%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.68% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5415/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=footer). Last update [87716a6...b7e417a](https://codecov.io/gh/huggingface/transformers/pull/5415?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I really like the API, I think it's fine if we enforce all attention layers to use positional arguments and wrap the output attentions bool into a tensor. \r\n\r\nCan we test how much memory is saved here for `bert-base-uncased` layers 6 - 18 for example? \r\nShould be quite easy to do now with the benchmark utils.",
"Benchmarked with the script and updated the PR @patrickvonplaten!",
"Awesome, it looks we can gain quite a lot of memory :-)",
"@LysandreJik , another problem is that `torch.utils.checkpoint.checkpoint` expects the function to return a tuple of `Variable`s. This won't work with forward functions that return other types as in [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L321).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik , any plans to resurrect this? ",
"Yes it's on my TODO (probably in ~2 weeks), and will be for most models (with some exceptions like BART and T5, which need a lot of plumbing to work with this POC)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,651 | 1,605 | MEMBER | null | Proof of concept for gradient checkpointing in PyTorch, using a model-agnostic approach. The POC is done for BERT and ALBERT.
Pros:
- Model agnostic, only a few lines to add to models to be able to use this functionality
- Reinforces the model layer API, adding `get_layers()` (name to be discussed) alongside `get_input_embeddings()` and `get_output_embeddings()`
Cons:
- The checkpoint API can only handle positional arguments, pytorch tensors or None only. This means that:
- The `output_hidden_states` must be cast to a tensor in the model
- Models that pass keyword arguments to their layers need to pass positional arguments (see GPT-2 for example, which uses keyword arguments [here](https://github.com/huggingface/transformers/blob/b45e65efa0fbff2611ddd68e14fa75cacef3fe08/src/transformers/modeling_gpt2.py#L488-L493)).
If you think this is a cool API, I'll go ahead and implement this for the remaining models. @patrickvonplaten @thomwolf @julien-c @sgugger @ibeltagy
Here are the results using the benchmarking script:
```py
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
args = PyTorchBenchmarkArguments(models=["bert-base-cased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512], no_inference=True, training=True)
config_base = BertConfig.from_pretrained("bert-base-cased", gradient_checkpointing=False)
benchmark = PyTorchBenchmark(args, configs=[config_base])
benchmark.run()
```
Result (only relevant info):
```
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-cased 8 8 0.028
bert-base-cased 8 32 0.029
bert-base-cased 8 128 0.072
bert-base-cased 8 512 0.296
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-cased 8 8 2419
bert-base-cased 8 32 2481
bert-base-cased 8 128 2985
bert-base-cased 8 512 8233
--------------------------------------------------------------------------------
```
```py
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig
args = PyTorchBenchmarkArguments(models=["bert-base-cased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512], no_inference=True, training=True)
config_base = BertConfig.from_pretrained("bert-base-cased", gradient_checkpointing=True)
benchmark = PyTorchBenchmark(args, configs=[config_base])
benchmark.run()
```
Result (only relevant info):
```
==================== TRAIN - SPEED - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-cased 8 8 0.049
bert-base-cased 8 32 0.05
bert-base-cased 8 128 0.109
bert-base-cased 8 512 0.473
--------------------------------------------------------------------------------
==================== TRAIN - MEMORY - RESULTS ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
bert-base-cased 8 8 2385
bert-base-cased 8 32 2403
bert-base-cased 8 128 2465
bert-base-cased 8 512 3969
--------------------------------------------------------------------------------
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5415/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5415",
"html_url": "https://github.com/huggingface/transformers/pull/5415",
"diff_url": "https://github.com/huggingface/transformers/pull/5415.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5415.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5414/comments | https://api.github.com/repos/huggingface/transformers/issues/5414/events | https://github.com/huggingface/transformers/pull/5414 | 648,511,520 | MDExOlB1bGxSZXF1ZXN0NDQyMzA5NDg4 | 5,414 | Fix roberta model ordering for TFAutoModel | {
"login": "Pierrci",
"id": 5020707,
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pierrci",
"html_url": "https://github.com/Pierrci",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=h1) Report\n> Merging [#5414](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b45e65efa0fbff2611ddd68e14fa75cacef3fe08&el=desc) will **decrease** coverage by `0.59%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5414 +/- ##\n==========================================\n- Coverage 78.27% 77.67% -0.60% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n- Hits 19047 18902 -145 \n- Misses 5287 5432 +145 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `72.50% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.76% <0.00%> (+32.51%)` | :arrow_up: |\n| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5414/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=footer). Last update [b45e65e...c3229c4](https://codecov.io/gh/huggingface/transformers/pull/5414?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"If the order is now consistent with `modeling_auto.py`, LGTM",
"@julien-c What do you mean by consistent exactly? Exact same ordering or same final behavior? (yes for the latter, no for the former for now).",
"@Pierrci both"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | Given that `RobertaConfig` inherits from `BertConfig`, the previous ordering was causing bert models to be wrongfully selected by `TFAutoModel...` in place of roberta ones when instantiated with roberta models (checked the others configs too, it seems it was the only one with such a problem). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5414",
"html_url": "https://github.com/huggingface/transformers/pull/5414",
"diff_url": "https://github.com/huggingface/transformers/pull/5414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5414.patch",
"merged_at": 1593732236000
} |
https://api.github.com/repos/huggingface/transformers/issues/5413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5413/comments | https://api.github.com/repos/huggingface/transformers/issues/5413/events | https://github.com/huggingface/transformers/pull/5413 | 648,474,822 | MDExOlB1bGxSZXF1ZXN0NDQyMjc4Mzk4 | 5,413 | [mobilebert] Avoid F.tanh deprecation warning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=h1) Report\n> Merging [#5413](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ac611145926ff63ee6d6cbd0b28c19bacb6f7ea1&el=desc) will **increase** coverage by `0.44%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5413 +/- ##\n==========================================\n+ Coverage 77.42% 77.87% +0.44% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n+ Hits 18841 18949 +108 \n+ Misses 5493 5385 -108 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `88.90% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=footer). Last update [ac61114...0845ecd](https://codecov.io/gh/huggingface/transformers/pull/5413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5413/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5413",
"html_url": "https://github.com/huggingface/transformers/pull/5413",
"diff_url": "https://github.com/huggingface/transformers/pull/5413.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5413.patch",
"merged_at": 1593549704000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5412/comments | https://api.github.com/repos/huggingface/transformers/issues/5412/events | https://github.com/huggingface/transformers/pull/5412 | 648,464,297 | MDExOlB1bGxSZXF1ZXN0NDQyMjY5NzQ0 | 5,412 | [GH Runner] fix yaml indent | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5412/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5412",
"html_url": "https://github.com/huggingface/transformers/pull/5412",
"diff_url": "https://github.com/huggingface/transformers/pull/5412.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5412.patch",
"merged_at": 1593548232000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5411/comments | https://api.github.com/repos/huggingface/transformers/issues/5411/events | https://github.com/huggingface/transformers/pull/5411 | 648,450,720 | MDExOlB1bGxSZXF1ZXN0NDQyMjU4NTg2 | 5,411 | Add TFBartForConditionalGeneration | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"Awesome! This PR will leverage many pretrained weights and make them available for TF! \r\nI don't really think there is a workaround for [supporting multiple input types](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_t5.py#L946) especially to make it compatible with Keras at the moment. There was a discussion on Slack about it (also cc @jplu ). Also, did you check that the model works in tf graph mode (corresponds to this test: https://github.com/huggingface/transformers/blob/316206c11466c9a4019a376843581bf519422369/tests/test_modeling_tf_common.py#L128 which is about to be added in another PR).",
"Sounds like I should wait until you start/for other changes to work more on this @jplu ?\r\n\r\nWould be really good IMO if whatever XLA magic we use decides whether the functions should take tuples or dicts. I much prefer either to both.\r\n",
"IMHO yes, and this will let you more time to polish your code :) After this is only mine, if everybody else prefer to have it merged I will not go against ^^ but I think that for now the more we add models, the more we add issues, and then the longer and harder it will be to fix everything.\r\n\r\nI'm in favor to use only positional arguments and dicts, but this should be discussed with everybody, to see what they think about it.",
"Is this still blocked @jplu ?",
"Try to rebase + make the changes to pass the tests. And it should be ok :)",
"Thanks for the review @LysandreJik !\r\n\r\n+ mBART, Pegasus and Blenderbot, and Marian will be in the next PR. (this is too big already for me to hold in my tiny brain).\r\n+ Your 4 bullets: Will do!"
] | 1,593 | 1,603 | 1,603 | CONTRIBUTOR | null | - adds `TFBartForConditionalGeneration`, which can generate summaries that are equivalent to pytorch.
#### TODO this PR:
- [x] fast tests besides two
- [x] reasonable xsum generations
- [x] tests passing
- [x] fix slow cnn test (tf needs to call `adjust_logits_during_generation`)
- [x] functional dropout
- [x] simplify torch and tf caching logic
- [x] docs
- [x] upload applicable tf/h5 weights.
#### Future PRs:
- [ ] blender/pegasus/mBART/marian etc.
- [ ] #7814 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5411",
"html_url": "https://github.com/huggingface/transformers/pull/5411",
"diff_url": "https://github.com/huggingface/transformers/pull/5411.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5411.patch",
"merged_at": 1603278617000
} |
https://api.github.com/repos/huggingface/transformers/issues/5410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5410/comments | https://api.github.com/repos/huggingface/transformers/issues/5410/events | https://github.com/huggingface/transformers/pull/5410 | 648,449,848 | MDExOlB1bGxSZXF1ZXN0NDQyMjU3ODc2 | 5,410 | [cleanup] TF T5 tests only init t5-base once. | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=h1) Report\n> Merging [#5410](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/991172922f9711d7bef160d6aedb2ed1059a88ff&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5410 +/- ##\n==========================================\n- Coverage 77.89% 77.87% -0.03% \n==========================================\n Files 141 140 -1 \n Lines 24634 24334 -300 \n==========================================\n- Hits 19189 18949 -240 \n+ Misses 5445 5385 -60 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.43% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `78.26% <0.00%> (-7.46%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (-4.11%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (-1.37%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `76.84% <0.00%> (-0.71%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <0.00%> (-0.69%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (-0.39%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.74% <0.00%> (-0.28%)` | :arrow_down: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/5410/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=footer). Last update [9911729...e4ce37c](https://codecov.io/gh/huggingface/transformers/pull/5410?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"CI is broken for other reasons."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5410",
"html_url": "https://github.com/huggingface/transformers/pull/5410",
"diff_url": "https://github.com/huggingface/transformers/pull/5410.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5410.patch",
"merged_at": 1593800869000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5409/comments | https://api.github.com/repos/huggingface/transformers/issues/5409/events | https://github.com/huggingface/transformers/pull/5409 | 648,444,819 | MDExOlB1bGxSZXF1ZXN0NDQyMjUzODMz | 5,409 | [CI] gh runner doesn't use -v, cats new result | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=h1) Report\n> Merging [#5409](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/27a7fe7a8d3e58d1df7ecc4c5390ac7be728724f&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5409 +/- ##\n==========================================\n+ Coverage 77.63% 77.87% +0.23% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n+ Hits 18892 18949 +57 \n+ Misses 5442 5385 -57 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.93% <0.00%> (+0.19%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (+2.01%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=footer). Last update [27a7fe7...dee5b20](https://codecov.io/gh/huggingface/transformers/pull/5409?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"merging, will fix if it breaks."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | This should reduce amount of scrolling required to find error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5409/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5409",
"html_url": "https://github.com/huggingface/transformers/pull/5409",
"diff_url": "https://github.com/huggingface/transformers/pull/5409.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5409.patch",
"merged_at": 1593547935000
} |
https://api.github.com/repos/huggingface/transformers/issues/5408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5408/comments | https://api.github.com/repos/huggingface/transformers/issues/5408/events | https://github.com/huggingface/transformers/pull/5408 | 648,442,388 | MDExOlB1bGxSZXF1ZXN0NDQyMjUxODI1 | 5,408 | Fix examples titles and optimization doc page | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=h1) Report\n> Merging [#5408](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5408 +/- ##\n==========================================\n+ Coverage 77.69% 77.90% +0.21% \n==========================================\n Files 140 140 \n Lines 24334 24336 +2 \n==========================================\n+ Hits 18906 18960 +54 \n+ Misses 5428 5376 -52 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `96.05% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.65% <100.00%> (+0.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `73.37% <0.00%> (-25.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.18% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/5408/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=footer). Last update [87716a6...b839c40](https://codecov.io/gh/huggingface/transformers/pull/5408?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This PR addresses two things:
- first, some of the titles were a bit messy in the navigation bar in the examples and optimization page, fixed that
- second, it expands the optimization documentation, adding mentions of which classes/functions go with which backend (since there is no TF prefix) and expand existing docstrings or add them if missing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5408",
"html_url": "https://github.com/huggingface/transformers/pull/5408",
"diff_url": "https://github.com/huggingface/transformers/pull/5408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5408.patch",
"merged_at": 1593605485000
} |
https://api.github.com/repos/huggingface/transformers/issues/5407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5407/comments | https://api.github.com/repos/huggingface/transformers/issues/5407/events | https://github.com/huggingface/transformers/pull/5407 | 648,427,394 | MDExOlB1bGxSZXF1ZXN0NDQyMjM5NjU4 | 5,407 | examples/seq2seq: never override $WANDB_PROJECT | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=h1) Report\n> Merging [#5407](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.33%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5407 +/- ##\n==========================================\n- Coverage 77.90% 77.57% -0.34% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n- Hits 18957 18876 -81 \n- Misses 5377 5458 +81 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-17.81%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `84.79% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.01% <0.00%> (-5.11%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `87.67% <0.00%> (-2.29%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.02% <0.00%> (-2.18%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.30% <0.00%> (-1.54%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.67% <0.00%> (-0.51%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5407/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=footer). Last update [c4d4e8b...6c8eb90](https://codecov.io/gh/huggingface/transformers/pull/5407?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | cc @borisdayma | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5407/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5407",
"html_url": "https://github.com/huggingface/transformers/pull/5407",
"diff_url": "https://github.com/huggingface/transformers/pull/5407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5407.patch",
"merged_at": 1593545353000
} |
https://api.github.com/repos/huggingface/transformers/issues/5406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5406/comments | https://api.github.com/repos/huggingface/transformers/issues/5406/events | https://github.com/huggingface/transformers/pull/5406 | 648,419,303 | MDExOlB1bGxSZXF1ZXN0NDQyMjMzMDc1 | 5,406 | [fix] slow fill_mask test failure | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=h1) Report\n> Merging [#5406](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5406 +/- ##\n==========================================\n- Coverage 77.90% 77.87% -0.03% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n- Hits 18957 18950 -7 \n- Misses 5377 5384 +7 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.31% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5406/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.37% <0.00%> (+25.00%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=footer). Last update [c4d4e8b...bd7b994](https://codecov.io/gh/huggingface/transformers/pull/5406?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"lgtm"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - New tokenizer API does not put space between `<s/>` and sentence.
- New result: "my name is John" is better than old result: "My name is" so fine to update `expected_result`.
- This is caused by tokenizers upgrade.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5406",
"html_url": "https://github.com/huggingface/transformers/pull/5406",
"diff_url": "https://github.com/huggingface/transformers/pull/5406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5406.patch",
"merged_at": 1593545295000
} |
https://api.github.com/repos/huggingface/transformers/issues/5405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5405/comments | https://api.github.com/repos/huggingface/transformers/issues/5405/events | https://github.com/huggingface/transformers/issues/5405 | 648,390,826 | MDU6SXNzdWU2NDgzOTA4MjY= | 5,405 | Colab session crash with XLA & Tranformers | {
"login": "tuner007",
"id": 46425391,
"node_id": "MDQ6VXNlcjQ2NDI1Mzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuner007",
"html_url": "https://github.com/tuner007",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuner007/subscriptions",
"organizations_url": "https://api.github.com/users/tuner007/orgs",
"repos_url": "https://api.github.com/users/tuner007/repos",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuner007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi! Could you share a colab notebook reproducing the error?",
"> Hi! Could you share a colab notebook reproducing the error?\r\n\r\nBelow code was sufficient to reproduce the error:\r\n```\r\n!pip3 install transformers\r\n\r\nVERSION = \"nightly\" #@param [\"1.5\" , \"20200325\", \"nightly\"]\r\n!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py\r\n!python pytorch-xla-env-setup.py --version $VERSION\r\n\r\nfrom transformers import T5Tokenizer\r\n```\r\n\r\n\r\n**_But now i am not able install XLA itself._** I can check they have made some changes in env-setup.py file yesterday. \r\n\r\n\r\n**Output when i installed XLA yesterday :**\r\n```\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 4139 100 4139 0 0 36628 0 --:--:-- --:--:-- --:--:-- 36628\r\nUpdating TPU and VM. This may take around 2 minutes.\r\nUpdating TPU runtime to pytorch-nightly ...\r\nCollecting cloud-tpu-client\r\n Downloading https://files.pythonhosted.org/packages/56/9f/7b1958c2886db06feb5de5b2c191096f9e619914b6c31fdf93999fdbbd8b/cloud_tpu_client-0.10-py3-none-any.whl\r\nCollecting google-api-python-client==1.8.0\r\n Downloading https://files.pythonhosted.org/packages/9a/b4/a955f393b838bc47cbb6ae4643b9d0f90333d3b4db4dc1e819f36aad18cc/google_api_python_client-1.8.0-py3-none-any.whl (57kB)\r\n |████████████████████████████████| 61kB 3.1MB/s \r\nRequirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client) (4.1.3)\r\nRequirement already satisfied: httplib2<1dev,>=0.9.2 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.17.4)\r\nRequirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (3.0.1)\r\nRequirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.17.2)\r\nRequirement already satisfied: google-api-core<2dev,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.16.0)\r\nRequirement already satisfied: six<2dev,>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.12.0)\r\nUninstalling torch-1.5.1+cu101:\r\nRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.0.3)\r\nRequirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.4.8)\r\nRequirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.2.8)\r\nRequirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (4.6)\r\nRequirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (47.3.1)\r\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (4.1.0)\r\nRequirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2018.9)\r\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.23.0)\r\nRequirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.10.0)\r\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.52.0)\r\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.9)\r\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.0.4)\r\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.24.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2020.6.20)\r\nInstalling collected packages: google-api-python-client, cloud-tpu-client\r\n Found existing installation: google-api-python-client 1.7.12\r\n Uninstalling google-api-python-client-1.7.12:\r\n Successfully uninstalled google-api-python-client-1.7.12\r\nSuccessfully installed cloud-tpu-client-0.10 google-api-python-client-1.8.0\r\nDone updating TPU runtime\r\n Successfully uninstalled torch-1.5.1+cu101\r\nUninstalling torchvision-0.6.1+cu101:\r\n Successfully uninstalled torchvision-0.6.1+cu101\r\nCopying gs://tpu-pytorch/wheels/torch-nightly-cp36-cp36m-linux_x86_64.whl...\r\n- [1 files][107.3 MiB/107.3 MiB] \r\nOperation completed over 1 objects/107.3 MiB. \r\nCopying gs://tpu-pytorch/wheels/torch_xla-nightly-cp36-cp36m-linux_x86_64.whl...\r\n/ [1 files][230.7 MiB/230.7 MiB] \r\nOperation completed over 1 objects/230.7 MiB. \r\nCopying gs://tpu-pytorch/wheels/torchvision-nightly-cp36-cp36m-linux_x86_64.whl...\r\n/ [1 files][ 1.7 MiB/ 1.7 MiB] \r\nOperation completed over 1 objects/1.7 MiB. \r\nProcessing ./torch-nightly-cp36-cp36m-linux_x86_64.whl\r\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==nightly) (0.16.0)\r\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==nightly) (1.18.5)\r\nERROR: fastai 1.0.61 requires torchvision, which is not installed.\r\nInstalling collected packages: torch\r\nSuccessfully installed torch-1.7.0a0+b9cca4b\r\nProcessing ./torch_xla-nightly-cp36-cp36m-linux_x86_64.whl\r\nInstalling collected packages: torch-xla\r\nSuccessfully installed torch-xla-1.6+71579ee\r\nProcessing ./torchvision-nightly-cp36-cp36m-linux_x86_64.whl\r\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (1.18.5)\r\nRequirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (1.7.0a0+b9cca4b)\r\nRequirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==nightly) (7.0.0)\r\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch->torchvision==nightly) (0.16.0)\r\nInstalling collected packages: torchvision\r\nSuccessfully installed torchvision-0.8.0a0+446eac6\r\nReading package lists... Done\r\nBuilding dependency tree \r\nReading state information... Done\r\nThe following package was automatically installed and is no longer required:\r\n libnvidia-common-440\r\nUse 'apt autoremove' to remove it.\r\nThe following NEW packages will be installed:\r\n libomp5\r\n0 upgraded, 1 newly installed, 0 to remove and 33 not upgraded.\r\nNeed to get 234 kB of archives.\r\nAfter this operation, 774 kB of additional disk space will be used.\r\nGet:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp5 amd64 5.0.1-1 [234 kB]\r\nFetched 234 kB in 1s (373 kB/s)\r\nSelecting previously unselected package libomp5:amd64.\r\n(Reading database ... 144379 files and directories currently installed.)\r\nPreparing to unpack .../libomp5_5.0.1-1_amd64.deb ...\r\nUnpacking libomp5:amd64 (5.0.1-1) ...\r\nSetting up libomp5:amd64 (5.0.1-1) ...\r\nProcessing triggers for libc-bin (2.27-3ubuntu1) ...\r\n/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link\r\n```\r\n\r\n**Output now:**\r\n```\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 4139 100 4139 0 0 64671 0 --:--:-- --:--:-- --:--:-- 64671\r\nUpdating TPU and VM. This may take around 2 minutes.\r\nUpdating TPU runtime to pytorch-nightly ...\r\nCollecting cloud-tpu-client\r\n Downloading https://files.pythonhosted.org/packages/56/9f/7b1958c2886db06feb5de5b2c191096f9e619914b6c31fdf93999fdbbd8b/cloud_tpu_client-0.10-py3-none-any.whl\r\nRequirement already satisfied: oauth2client in /usr/local/lib/python3.6/dist-packages (from cloud-tpu-client) (4.1.3)\r\nCollecting google-api-python-client==1.8.0\r\n Downloading https://files.pythonhosted.org/packages/9a/b4/a955f393b838bc47cbb6ae4643b9d0f90333d3b4db4dc1e819f36aad18cc/google_api_python_client-1.8.0-py3-none-any.whl (57kB)\r\n |████████████████████████████████| 61kB 2.7MB/s \r\nRequirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.4.8)\r\nRequirement already satisfied: six>=1.6.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (1.12.0)\r\nRequirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.2.8)\r\nRequirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (4.6)\r\nRequirement already satisfied: httplib2>=0.9.1 in /usr/local/lib/python3.6/dist-packages (from oauth2client->cloud-tpu-client) (0.17.4)\r\nRequirement already satisfied: google-api-core<2dev,>=1.13.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.16.0)\r\nRequirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (3.0.1)\r\nRequirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (0.0.3)\r\nRequirement already satisfied: google-auth>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from google-api-python-client==1.8.0->cloud-tpu-client) (1.17.2)\r\nRequirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.52.0)\r\nRequirement already satisfied: setuptools>=34.0.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (47.3.1)\r\nRequirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2018.9)\r\nRequirement already satisfied: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.10.0)\r\nRequirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.6/dist-packages (from google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.23.0)\r\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth>=1.4.1->google-api-python-client==1.8.0->cloud-tpu-client) (4.1.0)\r\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2020.6.20)\r\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (3.0.4)\r\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (2.9)\r\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.13.0->google-api-python-client==1.8.0->cloud-tpu-client) (1.24.3)\r\nUninstalling torch-1.5.1+cu101:\r\nInstalling collected packages: google-api-python-client, cloud-tpu-client\r\n Found existing installation: google-api-python-client 1.7.12\r\n Uninstalling google-api-python-client-1.7.12:\r\n Successfully uninstalled google-api-python-client-1.7.12\r\nSuccessfully installed cloud-tpu-client-0.10 google-api-python-client-1.8.0\r\nDone updating TPU runtime\r\n Successfully uninstalled torch-1.5.1+cu101\r\nUninstalling torchvision-0.6.1+cu101:\r\n Successfully uninstalled torchvision-0.6.1+cu101\r\nCopying gs://tpu-pytorch/wheels/torch-nightly-cp36-cp36m-linux_x86_64.whl...\r\n/ [1 files][ 0.0 B/ 0.0 B] \r\nOperation completed over 1 objects. \r\nCopying gs://tpu-pytorch/wheels/torch_xla-nightly-cp36-cp36m-linux_x86_64.whl...\r\n/ [1 files][ 0.0 B/ 0.0 B] \r\nOperation completed over 1 objects. \r\nCopying gs://tpu-pytorch/wheels/torchvision-nightly-cp36-cp36m-linux_x86_64.whl...\r\n/ [1 files][ 0.0 B/ 0.0 B] \r\nOperation completed over 1 objects. \r\nProcessing ./torch-nightly-cp36-cp36m-linux_x86_64.whl\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py\", line 153, in _main\r\n status = self.run(options, args)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py\", line 382, in run\r\n resolver.resolve(requirement_set)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 201, in resolve\r\n self._resolve_one(requirement_set, req)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 365, in _resolve_one\r\n abstract_dist = self._get_abstract_dist_for(req_to_install)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 313, in _get_abstract_dist_for\r\n req, self.session, self.finder, self.require_hashes\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py\", line 194, in prepare_linked_requirement\r\n progress_bar=self.progress_bar\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 452, in unpack_url\r\n unpack_file_url(link, location, download_dir, hashes=hashes)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 416, in unpack_file_url\r\n unpack_file(from_path, location, content_type)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 252, in unpack_file\r\n flatten=not filename.endswith('.whl')\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 114, in unzip_file\r\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1131, in __init__\r\n self._RealGetContents()\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1198, in _RealGetContents\r\n raise BadZipFile(\"File is not a zip file\")\r\nzipfile.BadZipFile: File is not a zip file\r\nProcessing ./torch_xla-nightly-cp36-cp36m-linux_x86_64.whl\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py\", line 153, in _main\r\n status = self.run(options, args)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py\", line 382, in run\r\n resolver.resolve(requirement_set)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 201, in resolve\r\n self._resolve_one(requirement_set, req)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 365, in _resolve_one\r\n abstract_dist = self._get_abstract_dist_for(req_to_install)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 313, in _get_abstract_dist_for\r\n req, self.session, self.finder, self.require_hashes\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py\", line 194, in prepare_linked_requirement\r\n progress_bar=self.progress_bar\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 452, in unpack_url\r\n unpack_file_url(link, location, download_dir, hashes=hashes)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 416, in unpack_file_url\r\n unpack_file(from_path, location, content_type)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 252, in unpack_file\r\n flatten=not filename.endswith('.whl')\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 114, in unzip_file\r\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1131, in __init__\r\n self._RealGetContents()\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1198, in _RealGetContents\r\n raise BadZipFile(\"File is not a zip file\")\r\nzipfile.BadZipFile: File is not a zip file\r\nProcessing ./torchvision-nightly-cp36-cp36m-linux_x86_64.whl\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py\", line 153, in _main\r\n status = self.run(options, args)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py\", line 382, in run\r\n resolver.resolve(requirement_set)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 201, in resolve\r\n self._resolve_one(requirement_set, req)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 365, in _resolve_one\r\n abstract_dist = self._get_abstract_dist_for(req_to_install)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/legacy_resolve.py\", line 313, in _get_abstract_dist_for\r\n req, self.session, self.finder, self.require_hashes\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py\", line 194, in prepare_linked_requirement\r\n progress_bar=self.progress_bar\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 452, in unpack_url\r\n unpack_file_url(link, location, download_dir, hashes=hashes)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/download.py\", line 416, in unpack_file_url\r\n unpack_file(from_path, location, content_type)\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 252, in unpack_file\r\n flatten=not filename.endswith('.whl')\r\n File \"/usr/local/lib/python3.6/dist-packages/pip/_internal/utils/unpacking.py\", line 114, in unzip_file\r\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1131, in __init__\r\n self._RealGetContents()\r\n File \"/usr/lib/python3.6/zipfile.py\", line 1198, in _RealGetContents\r\n raise BadZipFile(\"File is not a zip file\")\r\nzipfile.BadZipFile: File is not a zip file\r\nReading package lists... Done\r\nBuilding dependency tree \r\nReading state information... Done\r\nThe following package was automatically installed and is no longer required:\r\n libnvidia-common-440\r\nUse 'apt autoremove' to remove it.\r\nThe following NEW packages will be installed:\r\n libomp5\r\n0 upgraded, 1 newly installed, 0 to remove and 33 not upgraded.\r\nNeed to get 234 kB of archives.\r\nAfter this operation, 774 kB of additional disk space will be used.\r\nGet:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 libomp5 amd64 5.0.1-1 [234 kB]\r\nFetched 234 kB in 1s (362 kB/s)\r\nSelecting previously unselected package libomp5:amd64.\r\n(Reading database ... 144379 files and directories currently installed.)\r\nPreparing to unpack .../libomp5_5.0.1-1_amd64.deb ...\r\nUnpacking libomp5:amd64 (5.0.1-1) ...\r\nSetting up libomp5:amd64 (5.0.1-1) ...\r\nProcessing triggers for libc-bin (2.27-3ubuntu1) ...\r\n/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link\r\n```",
"Hi! You closed the issue, is it because you solved your problem?"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | I am trying to use xla with transformers but as soon as i import transformers after installing XLA the session is restarted.
Even i tried old version of transformers but same issue, is it related to colab ?
```
!pip3 install transformers
VERSION = "nightly" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
from transformers import T5Tokenizer
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5405/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5404/comments | https://api.github.com/repos/huggingface/transformers/issues/5404/events | https://github.com/huggingface/transformers/issues/5404 | 648,367,434 | MDU6SXNzdWU2NDgzNjc0MzQ= | 5,404 | How to interpret/act on this warning: "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM"? | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | ```
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
```
Returns this warning ...
```
Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForMaskedLM were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
What is the recommended way to understand/act on this warning message? For example, from what pre-trained model should I use for the MaskedLM task (and how would I know which to use for any other task)? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5403/comments | https://api.github.com/repos/huggingface/transformers/issues/5403/events | https://github.com/huggingface/transformers/issues/5403 | 648,363,659 | MDU6SXNzdWU2NDgzNjM2NTk= | 5,403 | How to interpret/act on this warning: "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM | {
"login": "ohmeow",
"id": 14000,
"node_id": "MDQ6VXNlcjE0MDAw",
"avatar_url": "https://avatars.githubusercontent.com/u/14000?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ohmeow",
"html_url": "https://github.com/ohmeow",
"followers_url": "https://api.github.com/users/ohmeow/followers",
"following_url": "https://api.github.com/users/ohmeow/following{/other_user}",
"gists_url": "https://api.github.com/users/ohmeow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ohmeow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ohmeow/subscriptions",
"organizations_url": "https://api.github.com/users/ohmeow/orgs",
"repos_url": "https://api.github.com/users/ohmeow/repos",
"events_url": "https://api.github.com/users/ohmeow/events{/privacy}",
"received_events_url": "https://api.github.com/users/ohmeow/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5403/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5402/comments | https://api.github.com/repos/huggingface/transformers/issues/5402/events | https://github.com/huggingface/transformers/issues/5402 | 648,357,457 | MDU6SXNzdWU2NDgzNTc0NTc= | 5,402 | Help with Debugging TF Common tests | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,598 | 1,598 | CONTRIBUTOR | null | I am a TF2 Noob trying to get TFBart working. Most tests pass besides the ones relying on save_pretrained and pt conversion. Has anybody experienced the following issues?
```
test_tf_compile_model:
h5py/h5o.pyx:202: in h5py.h5o.link
...
RuntimeError: Unable to create link (name already exists)
```
Or
```
test_pt_tf_model_equivalence:
AttributeError: tf_bart_model_9.tf_bart_encoder_9.tf_shared_embeddings_9.weight not found in PyTorch model
```
`transformers-cli env`:
```bash
- `transformers` version: 3.0.0
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5401/comments | https://api.github.com/repos/huggingface/transformers/issues/5401/events | https://github.com/huggingface/transformers/issues/5401 | 648,327,487 | MDU6SXNzdWU2NDgzMjc0ODc= | 5,401 | Runtime for BERT and Roberta | {
"login": "AkshitaJha",
"id": 8939340,
"node_id": "MDQ6VXNlcjg5MzkzNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8939340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkshitaJha",
"html_url": "https://github.com/AkshitaJha",
"followers_url": "https://api.github.com/users/AkshitaJha/followers",
"following_url": "https://api.github.com/users/AkshitaJha/following{/other_user}",
"gists_url": "https://api.github.com/users/AkshitaJha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkshitaJha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkshitaJha/subscriptions",
"organizations_url": "https://api.github.com/users/AkshitaJha/orgs",
"repos_url": "https://api.github.com/users/AkshitaJha/repos",
"events_url": "https://api.github.com/users/AkshitaJha/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkshitaJha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Which version of bert or Roberta you want to use base, large ? It also depends on maximum sequence length",
"I'd like to use the base version with the maximum sequence length of 128.",
"Hi @AkshitaJha , you can run training for few steps 1 or 2, that should give you rough idea of how much time it'll take to finish one epoch",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | I'd like to train a BERT model from scratch. Approximately, how long should it take to train 800k sentences (batch size of say 32) on a 10GB GeForce RTX 2080 GPU?
If I just fine-tune BERT for 800k sentences for 4 epochs, how long should that take?
Are there any benchmarks available except [exacct](https://blog.exxactcorp.com/nvidia-quadro-rtx-6000-bert-large-fine-tune-benchmarks-with-squad-dataset/)?
How much faster is RoBerta? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5401/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5400/comments | https://api.github.com/repos/huggingface/transformers/issues/5400/events | https://github.com/huggingface/transformers/pull/5400 | 648,319,753 | MDExOlB1bGxSZXF1ZXN0NDQyMTQ4NzMz | 5,400 | Create model card for schmidek/electra-small-cased | {
"login": "schmidek",
"id": 442328,
"node_id": "MDQ6VXNlcjQ0MjMyOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/442328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schmidek",
"html_url": "https://github.com/schmidek",
"followers_url": "https://api.github.com/users/schmidek/followers",
"following_url": "https://api.github.com/users/schmidek/following{/other_user}",
"gists_url": "https://api.github.com/users/schmidek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schmidek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schmidek/subscriptions",
"organizations_url": "https://api.github.com/users/schmidek/orgs",
"repos_url": "https://api.github.com/users/schmidek/repos",
"events_url": "https://api.github.com/users/schmidek/events{/privacy}",
"received_events_url": "https://api.github.com/users/schmidek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=h1) Report\n> Merging [#5400](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5400 +/- ##\n==========================================\n+ Coverage 77.69% 78.08% +0.39% \n==========================================\n Files 140 140 \n Lines 24334 24334 \n==========================================\n+ Hits 18906 19001 +95 \n+ Misses 5428 5333 -95 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=footer). Last update [87716a6...8b18faa](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5400",
"html_url": "https://github.com/huggingface/transformers/pull/5400",
"diff_url": "https://github.com/huggingface/transformers/pull/5400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5400.patch",
"merged_at": 1593590516000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5399/comments | https://api.github.com/repos/huggingface/transformers/issues/5399/events | https://github.com/huggingface/transformers/pull/5399 | 648,319,303 | MDExOlB1bGxSZXF1ZXN0NDQyMTQ4MzU0 | 5,399 | Add support for past states | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=h1) Report\n> Merging [#5399](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `13.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5399 +/- ##\n==========================================\n- Coverage 77.69% 77.60% -0.10% \n==========================================\n Files 140 140 \n Lines 24334 24362 +28 \n==========================================\n- Hits 18906 18905 -1 \n- Misses 5428 5457 +29 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.98% <0.00%> (-0.88%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `51.16% <ø> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.81% <21.42%> (-0.58%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.08% <100.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ø)` | |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=footer). Last update [87716a6...7cce295](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I like it! This solution seems to be the cleanest one we have. We have to make sure that all models name their inner state `mems`. \r\nGPT2 has a `past` parameter, but it should not be used for training as far as I know. ",
"@jplu adding yo as a reviewer for the TF side of things :-)"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This adds support in the `Trainer` for models that can use past states. Since there is no way to know in advance where the output will be in the model (we can guess it's going to be the second, but that may not always be the case for all models or user-defined models), I added an argument to the `TrainingArguments` for the index of where to look at mems in the outputs.
If it's left to -1, nothing happens, and if it's set, the `Trainer` will save the past mems in its state at each training step and add them to the inputs on the next step.
If this looks good to you, I'll add the same thing on the TF side before merging. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5399",
"html_url": "https://github.com/huggingface/transformers/pull/5399",
"diff_url": "https://github.com/huggingface/transformers/pull/5399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5399.patch",
"merged_at": 1593605516000
} |
https://api.github.com/repos/huggingface/transformers/issues/5398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5398/comments | https://api.github.com/repos/huggingface/transformers/issues/5398/events | https://github.com/huggingface/transformers/issues/5398 | 648,166,642 | MDU6SXNzdWU2NDgxNjY2NDI= | 5,398 | Inference time difference between pipeline and with standalone model and tokenizer | {
"login": "Arjunsankarlal",
"id": 28828445,
"node_id": "MDQ6VXNlcjI4ODI4NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/28828445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arjunsankarlal",
"html_url": "https://github.com/Arjunsankarlal",
"followers_url": "https://api.github.com/users/Arjunsankarlal/followers",
"following_url": "https://api.github.com/users/Arjunsankarlal/following{/other_user}",
"gists_url": "https://api.github.com/users/Arjunsankarlal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arjunsankarlal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arjunsankarlal/subscriptions",
"organizations_url": "https://api.github.com/users/Arjunsankarlal/orgs",
"repos_url": "https://api.github.com/users/Arjunsankarlal/repos",
"events_url": "https://api.github.com/users/Arjunsankarlal/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arjunsankarlal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"Anyone looking into this issue ? :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Model I am using: distilbert-base-cased-distilled-squad
Language I am using the model on : English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
With the two examples provided in the Usage section [here](https://huggingface.co/transformers/usage.html#extractive-question-answering), the first method using the pipeline takes more time for the same context and question.
I modified the contents and question same for both the example and I used the same finetuned SQuAD model for both.
```
from transformers import pipeline
from time import perf_counter
nlp = pipeline("question-answering", model="/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad",tokenizer="/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
context = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
st = perf_counter()
nlp(question="How many pretrained models are available in Transformers?", context=context)
nlp(question="What does Transformers provide?", context=context)
nlp(question="Transformers provides interoperability between which frameworks?", context=context)
print(f'Time taken is {perf_counter()-st}')
```
The output was,
> Time taken is 1.5614857940000002
With some added debugged code inside the pipeline code, it seems that the 99% of the time is taken when the input_ids are fed to the model for getting the start and end tensors.
The next example with standalone model and tokenizers is pretty fast,
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models are available in Transformers?",
"What does Transformers provide?",
"Transformers provides interoperability between which frameworks?",
]
st = perf_counter()
for question in questions:
inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer_start_scores, answer_end_scores = model(**inputs)
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}\n")
print(f'Time taken is {perf_counter()-st}')
```
The output is,
> Time taken is 0.4920176359999999
I would like to know if my understanding is wrong here. Because inside the pipeline flow, the model and tokenizers are initialized with AutoModelForQuestionAnswering and AutoTokenizer only.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I was expecting that both the examples should take relatively equal amount of time.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Darwin-18.0.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 0.4.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5398/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5397/comments | https://api.github.com/repos/huggingface/transformers/issues/5397/events | https://github.com/huggingface/transformers/issues/5397 | 648,119,148 | MDU6SXNzdWU2NDgxMTkxNDg= | 5,397 | tokenizer started throwing this warning, ""Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior."" | {
"login": "saahiluppal",
"id": 47444392,
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saahiluppal",
"html_url": "https://github.com/saahiluppal",
"followers_url": "https://api.github.com/users/saahiluppal/followers",
"following_url": "https://api.github.com/users/saahiluppal/following{/other_user}",
"gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions",
"organizations_url": "https://api.github.com/users/saahiluppal/orgs",
"repos_url": "https://api.github.com/users/saahiluppal/repos",
"events_url": "https://api.github.com/users/saahiluppal/events{/privacy}",
"received_events_url": "https://api.github.com/users/saahiluppal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"This is because we recently upgraded the library to version v3.0.0, which has an improved tokenizers API. You can either disable warnings or put `truncation=True` to remove that warning (as indicated in the warning).",
"how do you disable the warnings for this? I'm encountering the same issue. But I don't want to set the truncation=True",
"You can disable the warnings with:\r\n\r\n```py\r\nimport logging\r\nlogging.basicConfig(level=logging.ERROR)\r\n```",
"I've changed the logging level and removed max_length but am still getting this error:\r\n\r\nWARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n",
"On which version are you running? Can you try to install v3.0.2 to see if it fixes this issue?",
"I've tried with v3.0.2 and I'm getting the same warning messages even when I changed the logging level with the code snippet above.",
"@tutmoses @wise-east can you give us a self-contained code example reproducing the behavior?",
"I got the same question",
"update transformers library to v3 and explicitly provide \"trucation=True\" while encoding text using tokenizers",
"Could reproduce the error with this code:\r\n\r\n```\r\nfrom transformers.data.processors.utils import SingleSentenceClassificationProcessor\r\ntokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n\r\ntexts = [\"hi\", \"hello\", \"salut\", \"bonjour\"]\r\nlabels = [0, 0, 1, 1,]\r\n\r\nprocessor = SingleSentenceClassificationProcessor().create_from_examples(texts, labels)\r\ndataset = processor.get_features(tokenizer=tokenizer)\r\n```",
"Hello, \r\n\r\nUsing the following command had solved the problem:\r\n\r\n `import logging\r\nlogging.basicConfig(level = logging.ERROR)`\r\n\r\nHowever, since today 15h40 (Paris time), it does not work anymore and the following warning continues to pop up until crashing Google Colab:\r\n\r\n`Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\r\n`\r\n\r\nCould you please tell me how to solve it? I also tried to deactivate truncation from the encode_plus tokenizer:\r\n\r\n` encoded_dict = tokenizer.encode_plus(\r\n sent, # Sentence to encode.\r\n add_special_tokens = True, # Add '[CLS]' and '[SEP]'\r\n max_length = 128, # Pad & truncate all sentences.\r\n pad_to_max_length = True,\r\n return_attention_mask = True, # Construct attn. masks.\r\n return_tensors = 'pt', # Return pytorch tensors.\r\n truncation = False\r\n )`\r\n\r\nBut it did not work.\r\n\r\nThank for your help/replies,\r\n\r\n----------EDIT---------------\r\n\r\nI modified my code in the following way by setting \"truncation = True\" as suggested on this [post](https://github.com/Tiiiger/bert_score/pull/68). It worked perfectly! From what I understood, this should consider the max_lenght I'm applying and avoid the warning from comming up.\r\n\r\n` encoded_dict = tokenizer.encode_plus(\r\n sent, # Sentence to encode.\r\n add_special_tokens = True, # Add '[CLS]' and '[SEP]'\r\n max_length = 128, # Pad & truncate all sentences.\r\n pad_to_max_length = True,\r\n return_attention_mask = True, # Construct attn. masks.\r\n return_tensors = 'pt', # Return pytorch tensors.\r\n truncation = True\r\n )`\r\n\r\nJ.",
"'truncation=True' solves the problem.\r\ntokenizer = BertTokenizer.from_pretrained(cfg.text_model.pretrain)\r\nlengths = [len(tokenizer.tokenize(c)) + 2 for c in captions]\r\ncaptions_ids = [torch.LongTensor(tokenizer.encode(c, max_length=max_len, pad_to_max_length=True**_, truncation=True_**))\r\n for c in captions]",
"not elegant solution\r\nmodify transformers source code (`~/python/site-packages/transformers/tokenization_utils_base.py`) line 1751 to aviod this warning\r\n\r\n```\r\n if 0: #if verbose:\r\n logger.warning(\r\n \"Truncation was not explicitely activated but `max_length` is provided a specific value, \"\r\n \"please use `truncation=True` to explicitely truncate examples to max length. \"\r\n \"Defaulting to 'longest_first' truncation strategy. \"\r\n \"If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy \"\r\n \"more precisely by providing a specific strategy to `truncation`.\"\r\n )\r\n truncation = \"longest_first\"\r\n```",
"add 'truncation=True' to tokenizer.encode_plus(truncation=True). \r\nwork to me!"
] | 1,593 | 1,620 | 1,593 | NONE | null | Recently while experimenting, BertTokenizer start to throw this warning
```bash
Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.
```
I know, this warning asks to provide truncation value.
I'm asking here because this warning started this morning. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5397/reactions",
"total_count": 25,
"+1": 25,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5397/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5396/comments | https://api.github.com/repos/huggingface/transformers/issues/5396/events | https://github.com/huggingface/transformers/pull/5396 | 648,118,085 | MDExOlB1bGxSZXF1ZXN0NDQxOTgyNDQx | 5,396 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=h1) Report\n> Merging [#5396](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/331d8d2936e7a140225cf60301ba6469930fd216&el=desc) will **increase** coverage by `0.85%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5396 +/- ##\n==========================================\n+ Coverage 76.98% 77.84% +0.85% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n+ Hits 18719 18928 +209 \n+ Misses 5595 5386 -209 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (+2.49%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=footer). Last update [331d8d2...3299725](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Create model card for electicidad-small (Spanish Electra) fine-tuned on SQUAD-esv1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5396",
"html_url": "https://github.com/huggingface/transformers/pull/5396",
"diff_url": "https://github.com/huggingface/transformers/pull/5396.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5396.patch",
"merged_at": 1593779350000
} |
https://api.github.com/repos/huggingface/transformers/issues/5395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5395/comments | https://api.github.com/repos/huggingface/transformers/issues/5395/events | https://github.com/huggingface/transformers/pull/5395 | 648,070,821 | MDExOlB1bGxSZXF1ZXN0NDQxOTQyODE5 | 5,395 | [Almost all TF models] TF clean up: add missing CLM / MLM loss; fix T5 naming and keras compile | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik @julien-c @jplu @thomwolf @sgugger - can you take a look at this example if the CLM loss is correctly added? If yes, I will add this loss to all other CLM models and add tests.",
"Looks good to me!!",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=h1) Report\n> Merging [#5395](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `77.17%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5395 +/- ##\n==========================================\n- Coverage 76.39% 76.35% -0.04% \n==========================================\n Files 141 141 \n Lines 24617 24868 +251 \n==========================================\n+ Hits 18807 18989 +182 \n- Misses 5810 5879 +69 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <ø> (ø)` | |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.86% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <10.00%> (-0.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <14.28%> (-0.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `63.03% <40.42%> (-9.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.97% <80.00%> (-1.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.90% <83.65%> (-0.53%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <84.61%> (-0.23%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.47% <100.00%> (+0.48%)` | :arrow_up: |\n| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=footer). Last update [21cd8c4...c25aa53](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok will add this for all TF CLM models then :-) and add tests."
] | 1,593 | 1,594 | 1,594 | MEMBER | null | This PR aligns TF code more with PT code and adds full training support to all CLM and MLM models applying @jplu's loss design to the remaining models. In more detail the following things are included in the PR:
- Add `TFMaskedLanguageModelingLoss` and `TFCausalLanguageModelingLoss` to all CLM and MLM TF models. Only Transfo-XL and XLM are not included since they use adaptive softmax (TF Transfo-XL currently has no Adaptive Softmax implemented cc @TevenLeScao for notification)
- Change value to mask CE loss from -1 to -100 to align with PyTorch cc - tf_ner script is updated accordingly @jplu. Using -1 is deprecated here and should be removed in a future version.
- Split Bert into BertForCLM and BertForMLM as was done in PyTorch (small break in backward compatibility here)
- Split TFAutoModelWithLMHead into TFAutoModelForCLM, ...ForMLM, ForSeq2Seq as was done in PyTorch to make TF ready for encoder-decoder wrapper.
- Add various tests for `modeling_tf_auto`.py e.g. that the mappings are correctly ordered
- Fix inconsistent naming in TF T5 and fix TF T5 keras compilation bug @sshleifer - encoder decoder tf related tests are fixed so should concern tf bart as well
TODO:
- [x] add labels to all tests where it applies
- [x] add CLM loss to all other models
- [x] add MLM loss to all other models
- [x] Clean TF T5
Future Pr:
- [ ] Test that TF Trainer works well with all new CLM / MLM models - we should definitely start adding tests for TF Trainer as well @jplu @julien-c @LysandreJik
- [ ] TF Benchmark can now be done on training as welll -> update the benchmark scripts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5395/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5395",
"html_url": "https://github.com/huggingface/transformers/pull/5395",
"diff_url": "https://github.com/huggingface/transformers/pull/5395.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5395.patch",
"merged_at": 1594138554000
} |
https://api.github.com/repos/huggingface/transformers/issues/5394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5394/comments | https://api.github.com/repos/huggingface/transformers/issues/5394/events | https://github.com/huggingface/transformers/pull/5394 | 648,049,464 | MDExOlB1bGxSZXF1ZXN0NDQxOTI1MDE5 | 5,394 | Upload DistilBART artwork | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | w.r.t. #5278 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5394",
"html_url": "https://github.com/huggingface/transformers/pull/5394",
"diff_url": "https://github.com/huggingface/transformers/pull/5394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5394.patch",
"merged_at": 1593511871000
} |
https://api.github.com/repos/huggingface/transformers/issues/5393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5393/comments | https://api.github.com/repos/huggingface/transformers/issues/5393/events | https://github.com/huggingface/transformers/issues/5393 | 648,030,501 | MDU6SXNzdWU2NDgwMzA1MDE= | 5,393 | GPT2Tokenizer.save_pretrained does not work in v3.0.0 | {
"login": "hzhwcmhf",
"id": 1344510,
"node_id": "MDQ6VXNlcjEzNDQ1MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1344510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hzhwcmhf",
"html_url": "https://github.com/hzhwcmhf",
"followers_url": "https://api.github.com/users/hzhwcmhf/followers",
"following_url": "https://api.github.com/users/hzhwcmhf/following{/other_user}",
"gists_url": "https://api.github.com/users/hzhwcmhf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hzhwcmhf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzhwcmhf/subscriptions",
"organizations_url": "https://api.github.com/users/hzhwcmhf/orgs",
"repos_url": "https://api.github.com/users/hzhwcmhf/repos",
"events_url": "https://api.github.com/users/hzhwcmhf/events{/privacy}",
"received_events_url": "https://api.github.com/users/hzhwcmhf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! Do you mind giving me your tokenizers version?\r\n\r\nIt shouldn't fail like this, `save_pretrained` is still supported and the recommended way to save tokenizers.",
"``tokenizers-0.8.0rc4`` is installed when I use ``pip install transformers``.\r\n\r\nActually, I'm running a CI for my project. You can see more information here: https://travis-ci.com/github/thu-coai/cotk/builds/173607827",
"Hmm @mfuntowicz might have an idea about what's going on here?",
"The problem still exists in 3.0.1",
"@hzhwcmhf, I encounter the same issue while saving pretrained fast tokenizer based on `RobertaTokenizerFast`. \r\n\r\nThis happens due to adding non-serializable `AddedToken` instances to `kwargs` and later `init_kwargs`: https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_roberta.py#L316\r\nhttps://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_roberta.py#L321\r\n\r\n`AddedToken` instances appear in `init_kwargs` property which is used to build a `tokenizer_config` which in its turn is being serialized into a `tokenizer_config_file`:\r\nhttps://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1355\r\nhttps://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1362\r\n\r\nNote, that `AddedToken` instances are properly serialized to `special_tokens_map_file`:\r\nhttps://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1368\r\n\r\n## Workaround\r\nAs a workaround you can inspect `tokenizer.init_kwargs` property and update all items which are not JSON serializable (i.e. not strings in this case) before saving a pretrained tokenizer.\r\n\r\n```python\r\ntokenizer = ...\r\n\r\ninit_kwargs = {}\r\nfor key, value in tokenizer.init_kwargs.items():\r\n if isinstance(value, AddedToken):\r\n init_kwargs[key] = str(value)\r\n else:\r\n init_kwargs[key] = value\r\n\r\ntokenizer.init_kwargs = init_kwargs\r\n\r\ntokenizer.save_pretrained(<path>)\r\n```"
] | 1,593 | 1,594 | 1,594 | CONTRIBUTOR | null | # 🐛 Bug
## Information
I am using GPT2Tokenizer and try to save the information about the tokenizer.
In v2.7, I always do
```python
>>> from transformers import GPT2Tokenizer
>>> a = GPT2Tokenizer('./tests/dataloader/dummy_gpt2vocab/vocab.json', './tests/dataloader/dummy_gpt2vocab/merges.txt')
>>> a.save_pretrained("./")
('./vocab.json', './merges.txt', './special_tokens_map.json', './added_tokens.json')
```
And I found it doesn't work when I upgrade to 3.0.
```python
>>> from transformers import GPT2Tokenizer
>>> a = GPT2Tokenizer('./tests/dataloader/dummy_gpt2vocab/vocab.json', './tests/dataloader/dummy_gpt2vocab/merges.txt')
>>> a.save_pretrained("./")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\site-packages\transformers\tokenization_utils_base.py", line 1362, in save_pretrained
f.write(json.dumps(tokenizer_config, ensure_ascii=False))
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type AddedToken is not JSON serializable
>>> a.save_vocabulary("./")
('./vocab.json', './merges.txt')
```
I know there is ``save_vocabulary`` now, but has ``save_pretrained`` been removed and should not be used?
However, ``save_vocabulary`` does not save special tokens. So how can I save the whole information about the tokenizer.
## Environment info
- `transformers` version: 3.0.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5393/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5393/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5392/comments | https://api.github.com/repos/huggingface/transformers/issues/5392/events | https://github.com/huggingface/transformers/issues/5392 | 648,006,147 | MDU6SXNzdWU2NDgwMDYxNDc= | 5,392 | Windows: No matching distribution found for lightning_base | {
"login": "MichaelJanz",
"id": 66110831,
"node_id": "MDQ6VXNlcjY2MTEwODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelJanz",
"html_url": "https://github.com/MichaelJanz",
"followers_url": "https://api.github.com/users/MichaelJanz/followers",
"following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelJanz/orgs",
"repos_url": "https://api.github.com/users/MichaelJanz/repos",
"events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelJanz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1936351150,
"node_id": "MDU6TGFiZWwxOTM2MzUxMTUw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Examples",
"name": "Examples",
"color": "d4c5f9",
"default": false,
"description": "Which is related to examples in general"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Also I am a bit worried, that my RTx 2070 with 8GB will be too small for training, since 13GB were recommended for a batch size of 1 with fp16. I appreciate any hints, what I could do to make it run. Thanks you",
"Also I followed the basic installation process from here:\r\nhttps://github.com/huggingface/transformers/blob/master/examples/README.md#important-note\r\nbut I still get the same error",
"lightning_base is still there. Do you have a traceback?",
"Yep, thats what I get so far:\r\nTraceback (most recent call last):\r\n _File \"finetune.py\", line 15, in <module>\r\n from lightning_base import BaseTransformer, add_generic_args, generic_train\r\nModuleNotFoundError: No module named 'lightning_base'_\r\n",
"try `export PYTHONPATH=\"../\":\"${PYTHONPATH}\"`\r\nMore info in `examples/seq2seq/README.md`",
"I had the same issue as you described @MichaelJanz, also on Windows10 with python 3.7.7. \r\n\r\nTo clarify my setup, I followed the instructions under \"Important Note\" on the [transformers/examples page](https://github.com/huggingface/transformers/tree/master/examples) and got an error that faiss was unable to be installed (faiss only supports Linux and MacOS currently I think). I removed faiss from the list of requirements at examples/requirements.txt and ran the example finetune.sh command. Similarly to Michael, I got an error that lightning_base was not found. Since \"export\" doesn't work on windows command line, I inserted two lines above the lightning_base import in finetune.py:\r\n\r\n```\r\nimport sys\r\nsys.path.insert(0, r'C:\\Users\\chris\\transformers\\examples\r\n```\r\n\r\nThis solved the issue that lightning_base wasn't found, but I encountered a new error:\r\n```\r\nFile \"finetune.py\", line 17, in <module>\r\n from lightning_base import BaseTransformer, add_generic_args, generic_train\r\n...\r\n File \"C:\\Users\\chris\\transformers\\env\\lib\\site-packages\\tokenizers\\__init__.py\", line 17, in <module>\r\n from .tokenizers import Tokenizer, Encoding, AddedToken\r\nModuleNotFoundError: No module named 'tokenizers.tokenizers'\r\n```\r\n\r\nLooking at the tokenizers package installed, I didn't see an additional folder labeled \"tokenizers\". The tokenizers version I have within my virtual environment is `tokenizers==0.8.0rc4`. @sshleifer , could you let me know what version of tokenizers you have in your environment? Let me know if you have any other suggestions about what might be happening (I worry that the problem lies with using Windows).\r\n\r\nEdit: for context, I tried running the finetuning script within a Linux environment and had no problems, with the same `tokenizers==0.8.0rc4` version. I'm guessing that this whole issue is a Windows problem.",
"Yeah I have the same version of tokenizers, this seems like a windows problem. ",
"To fix that for me, I decided to execute _finetune.sh_ directly. I had to insert\r\n`export PYTHONPATH=\"$PATH:(absolute path to the examples folder)`\r\nand `--data_dir (absolute path to example folder)`\r\nHowever, thats an unpretty workaround, which works.\r\n\r\nThen I got into an encoding error, which I had to change line 39 on utils.py to \r\n`lns = lmap(str.strip, data_path.open(encoding=\"UTF-8\").readlines())`\r\n\r\nThen the finetuning process starts. It just is running horribly slow with no GPU usage with the warning:\r\n\r\n`Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError(\"No module named 'amp_C'\")`\r\n\r\nSo Gpu is not used. I already opened an [Issue](https://github.com/NVIDIA/apex/issues/905) on Nvidia/Apex about bulding Apex for Windows with Cuda extensions, but any hint here is appreciated:\r\n\r\nI am thinking about switching to Ubuntu, since it seems like alot of errors have their origin in windows. Is that a recommended thing?\r\n",
"Before you start running these commands: it probably end up _not_ working due to Mecab issues (see bottom).\r\n\r\n- FAISS is currently not supported on Windows (though it does have [an open project](https://github.com/facebookresearch/faiss/projects/2)): remove from requirements \r\n- instead of `wget` you can use `Invoke-WebRequest` on Powershell (after having `cd`'d into `examples/seq2seq`): \r\n\r\n```ps\r\nInvoke-WebRequest https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz -OutFile xsum.tar.gz\r\n```\r\n\r\n- To add the environment variable: \r\n\r\n```ps\r\n$env:Path += \";\" + (Join-Path -Path (Get-Item .).FullName -ChildPath \"xsum\")\r\n```\r\n\r\nRather than using the bash file (finetune.sh), I suggest that you open it and copy-paste the Python command that is in there, including the options that are already present, and add your own options after it (things like data dir, model name).\r\n\r\nBefore running the command, add `../` to PYTHONPATH:\r\n\r\n```ps\r\n$env:PythonPath += \";../\"\r\n```\r\n\r\nAfter all that, you will probably still run into a problem involving mecab. It is used for Japanese tokenisation, and it is not easy to disable (it's also part of sacrebleu). Mecab has a new v1.0 release that works on Windows, however, it includes breaking changes in the rest of [transformers ](https://github.com/huggingface/transformers/pull/5375) as well as [sacrebleu](https://github.com/mjpost/sacrebleu/issues/94). This is unfortunate because such a small change means that many functionalities or examples cannot be used on Windows, _even if you do not use Japanese_. This is due to the nature of import. I'd rather have that these libraries are only imported when they are needed to maximise cross-platform usage.",
"@MichaelJanz How did you get passed the mecab issue? And does it run correctly without the AMP flag?\r\n\r\nRunning Ubuntu is one way to go. However, my personal recommendation is waiting a bit until WSL has full GPU support (which is currently in beta, [you can try it out!](https://developer.nvidia.com/cuda/wsl)). That way, you can still enjoy your good ol' Windows experience, and only open up an Ubuntu terminal when running your experiments.",
"@BramVanroy It runs well so far under windows, but I dont know what the AMP flag is for. Just the gpu support is missing in training, however gpu is available during testing (atleast I get some gpu usage there and high clock)\r\n\r\nAbout mecab, I did not have any issues with it at all. In theory, training is working and possible, it just takes way too long.\r\n\r\nTy for the hint about WSL gpu support, I am just working on getting that to run",
"It is very odd how you did not have any issues with mecab because the pinned versions are not supported on Windows...\r\n\r\nI meant the `--fp16` flag. If used, it will try to use AMP. But since you seem to have issues with AMP, you can try to remove `--fp16` from the command in `finetune.sh`.",
"I removed the --fp16 flag and the missing AMP_C message is gone. But still the gpu is not used. Could it be, that it is too small for that model, so it just uses the cpu? I dont know how Pytorch handles memory issues. \r\nThats my screen so far. As you can see, the gpu is not utilized.\r\n\r\n",
"Are you sure? Can you let the code run for a couple of steps and then monitor the GPU? It is possible that the code first does some data preprocessing, which would be CPU-intensive, without GPU. Only when training really starts (and you see the steps moving) the GPU should be used.",
"@MichaelJanz export PYTHONPATH=\"$PATH:(absolute path to the examples folder)\r\ndo you mean something like this? export PYTHONPATH=\"$PATH:/content/transformers/examples\"\r\nI'm using google collab and the path to examples is /content/transformers/examples,\r\nSorry I'm a complete noob when it comes to python\r\n\r\n-------------------------------------------------------------\r\nEdit:\r\nI think I fixed it by giving the path to finetune.py in finetune.sh :\r\npython /content/transformers/examples/seq2seq/finetune.py \\\r\n --learning_rate=3e-5 \\\r\n --fp16 \\\r\n --gpus 1 \\\r\n --do_train \\\r\n --do_predict \\\r\n --n_val 1000 \\\r\n --val_check_interval 0.1 \\\r\n --sortish_sampler \\\r\n $@\r\n\r\nHowever, I got a new error:\r\n\r\nFile \"/content/transformers/examples/seq2seq/finetune.sh\", line 7\r\n --gpus 1 \\\r\nis it normal?\r\n ^",
"@Hildweig make sure your lightning example is up to date with examples/requirements.txt. Closing this. Pls make a new issue if you are still struggling to get things working :)"
] | 1,593 | 1,595 | 1,595 | CONTRIBUTOR | null | # 🐛 Bug
I followed the seq2seq readme and wanted to try the sshleifer/distilbart-cnn-12-6 model for absractive text summarization.
I got the bug above, it seems like lightning_base was part of this project before it was moved/removed.
## Information
Model I am using: sshleifer/distilbart-cnn-12-6
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Cnn_dm
## To reproduce
Steps to reproduce the behavior:
1. Follow the instructions in the readme and prepare your environment & oull the latest master
2. Start summarization by using `./finetune.sh \
--data_dir $CNN_DIR \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path facebook/bart-large`
3. Receive the error
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect the model to start inference
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: no
@sshleifer, you asked for beeing tagged on issues in the readme
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5392/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5392/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5391/comments | https://api.github.com/repos/huggingface/transformers/issues/5391/events | https://github.com/huggingface/transformers/issues/5391 | 647,983,215 | MDU6SXNzdWU2NDc5ODMyMTU= | 5,391 | Training a GPT-2 from scratch in Greek-text, results in a low perplexity score of 7 after 15 epochs. Is it normal that score? | {
"login": "Nkonstan",
"id": 35643708,
"node_id": "MDQ6VXNlcjM1NjQzNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/35643708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nkonstan",
"html_url": "https://github.com/Nkonstan",
"followers_url": "https://api.github.com/users/Nkonstan/followers",
"following_url": "https://api.github.com/users/Nkonstan/following{/other_user}",
"gists_url": "https://api.github.com/users/Nkonstan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nkonstan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nkonstan/subscriptions",
"organizations_url": "https://api.github.com/users/Nkonstan/orgs",
"repos_url": "https://api.github.com/users/Nkonstan/repos",
"events_url": "https://api.github.com/users/Nkonstan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nkonstan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I try to train a GPT-2 from scratch in Greek with an older version of run_language_modeling.py (https://github.com/huggingface/transformers/tree/master/examples/language-modeling) script, but I get a low perplexity score of 7 after 15 epochs.\r\n\r\nMy data for train is about 4.6Gb and is constructed as 5 sentences per line. The data for the evaluation is about 450Mb constructed with the same way. Use of BPE for the encoding with a vocab of 22000 merges.\r\nFor the train is used FP16 O2.\r\n",
"I think it is normal.\r\nFor perplexity, the lower, the better.\r\nBy the way, can you share your script for preprocessing data?",
"@xx-zhou16 Ok i found a mistake i had...when i was computing the loss i wasn't ignoring the pad_token . Now i train it again , i think now the perplexity score will stop about 11, which is again low. \r\nI use the line by LineByLineTextDataset :\r\n\r\nclass LineByLineTextDataset(Dataset):\r\n def __init__(self, tokenizer: PreTrainedTokenizer, args, file_path: str, block_size=512):\r\n assert os.path.isfile(file_path)\r\n logger.info(\"Creating features from dataset file at %s\", file_path)\r\n\r\n with open(file_path, encoding=\"utf-8\") as f:\r\n lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n self.examples = tokenizer.batch_encode_plus(lines, add_special_tokens=True, truncation=True, max_length=block_size)[\"input_ids\"] \r\n \r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def __getitem__(self, i):\r\n return torch.tensor(self.examples[i], dtype=torch.long)\r\n\r\n def load_and_cache_examples(args, tokenizer, evaluate=False):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n if args.line_by_line:\r\n return LineByLineTextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size)\r\n\r\npadding and creating an attention mask .\r\n\r\ndef collate(examples: List[torch.Tensor]):\r\n padding_value = 0 if tokenizer._pad_token is None else tokenizer.pad_token_id\r\n input_ids = pad_sequence(examples, batch_first=True, padding_value=padding_value)\r\n max_length = input_ids.shape[1]\r\n attention_mask = torch.stack([torch.cat([torch.ones(len(t), dtype=torch.long), torch.zeros(max_length - len(t), dtype=torch.long)]) for t in examples])\r\n return input_ids, attention_mask\r\n\r\n\r\nand that is in the evaluation , for the perplexity score computation.\r\n\r\n for batch in tqdm(eval_dataloader, desc=\"Evaluating\"):\r\n input_ids, attention_mask = batch\r\n inputs, labels = (input_ids, input_ids)\r\n\r\n inputs = inputs.to(args.device)\r\n labels = labels.to(args.device)\r\n attention_mask = attention_mask.to(args.device)\r\n\r\n with torch.no_grad():\r\n outputs = model(inputs, labels=labels, attention_mask=attention_mask)\r\n lm_loss = outputs[0].mean().item()\r\n eval_loss += lm_loss\r\n nb_eval_steps += 1\r\n\r\n\r\n eval_loss = eval_loss / nb_eval_steps\r\n perplexity = torch.exp(torch.tensor(eval_loss))",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5391/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5390/comments | https://api.github.com/repos/huggingface/transformers/issues/5390/events | https://github.com/huggingface/transformers/issues/5390 | 647,970,936 | MDU6SXNzdWU2NDc5NzA5MzY= | 5,390 | model.generate source code | {
"login": "yaof20",
"id": 31304106,
"node_id": "MDQ6VXNlcjMxMzA0MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31304106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaof20",
"html_url": "https://github.com/yaof20",
"followers_url": "https://api.github.com/users/yaof20/followers",
"following_url": "https://api.github.com/users/yaof20/following{/other_user}",
"gists_url": "https://api.github.com/users/yaof20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaof20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaof20/subscriptions",
"organizations_url": "https://api.github.com/users/yaof20/orgs",
"repos_url": "https://api.github.com/users/yaof20/repos",
"events_url": "https://api.github.com/users/yaof20/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaof20/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"`generate()` is defined in the `PretrainedModel` class here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L872. Given that `BartModel`, `BartForConditionalGeneration`, etc, inherit from `PretrainedModel` they all have access to this generate function. \r\n\r\nHope this help? ",
"We actually just moved all of the `generate` functions to their own file:\r\nhttps://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> We actually just moved all of the `generate` functions to their own file:\r\n> \r\n> https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101\r\n\r\nHi, it seems that .generate() can only take input_ids as source input. I wonder whether input_embs can be used as input.",
"in case anyone's still looking for the function, it has moved again and can currently be found here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/3e93dd295b5343557a83bc07b0b2ea64c926f9b4/src/transformers/generation/utils.py#L1342"
] | 1,593 | 1,697 | 1,598 | NONE | null | Hi there,
I am trying to use BART to do NLG task. During my reading the BART tutorial on the website, I couldn't find the definition of 'model.generate()" function. Could you please add some explaination on that? thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5390/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5389/comments | https://api.github.com/repos/huggingface/transformers/issues/5389/events | https://github.com/huggingface/transformers/pull/5389 | 647,968,025 | MDExOlB1bGxSZXF1ZXN0NDQxODYxNDU3 | 5,389 | Raises PipelineException on FillMaskPipeline when there are != 1 mask_token in the input | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=h1) Report\n> Merging [#5389](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64e3d966b1131c15b5905b1e1e582d4bebac1ef0&el=desc) will **decrease** coverage by `0.16%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5389 +/- ##\n==========================================\n- Coverage 77.75% 77.59% -0.17% \n==========================================\n Files 140 140 \n Lines 24373 24384 +11 \n==========================================\n- Hits 18951 18920 -31 \n- Misses 5422 5464 +42 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.79% <100.00%> (+0.47%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.11% <0.00%> (+5.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+33.33%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=footer). Last update [64e3d96...56d9c36](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"no particular reason @LysandreJik other than developer lazyness (me)",
"> no particular reason @LysandreJik other than developer lazyness (me)\r\n\r\n@julien-c \r\n\r\nHi Julien. I understand if the implementation of multiple tokens is too inconvenient, but I am sure that many would appreciate such a feature. ",
"Hey @mfuntowicz , @julien-c,\r\n\r\n> LGTM! Why don't we handle multiple tokens, by the way? The models should behave somewhat okay given that their pre-training hides more than one token, right?\r\n\r\nI am implementing this feature in my local version, is there anything I should be careful of? I am just planning to remove the exception trigger and handle the multiple masked tokens output. \r\n\r\nAlso, let me know if you'd like a PR for this. \r\n"
] | 1,593 | 1,608 | 1,593 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5389",
"html_url": "https://github.com/huggingface/transformers/pull/5389",
"diff_url": "https://github.com/huggingface/transformers/pull/5389.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5389.patch",
"merged_at": 1593617267000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5388/comments | https://api.github.com/repos/huggingface/transformers/issues/5388/events | https://github.com/huggingface/transformers/issues/5388 | 647,940,489 | MDU6SXNzdWU2NDc5NDA0ODk= | 5,388 | Why T5 do not generate the whole next sentence as one of the pretrain loss? | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @guotong1988 what do you exactly mean by not generating the whole next sentence ?",
"Sorry, my mistake.\r\nInput the previous sentence and generate the next sentence, instead of input the two sentences.",
"I don't really understand the question here. Are you asking why T5 was not trained with the objective to generate the complete next sentence? ",
"Thank you for your reply.\r\nYes. I am asking why T5 was not pre-trained with the objective to generate the complete next sentence?\r\n\r\nDo you think generating the whole next sentence as one of the pretrain loss will improve pretrain performance?",
"\r\nhttps://github.com/google-research/text-to-text-transfer-transformer/issues/286"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | T5 has the next-sentence-predict loss.
And the next-sentence-predict loss is not generating the whole next sentence but generating is_next or not_next token.
Thank you very much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5388/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5387/comments | https://api.github.com/repos/huggingface/transformers/issues/5387/events | https://github.com/huggingface/transformers/issues/5387 | 647,918,029 | MDU6SXNzdWU2NDc5MTgwMjk= | 5,387 | BART fine tuning on gpu issue | {
"login": "Annu99",
"id": 67584311,
"node_id": "MDQ6VXNlcjY3NTg0MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/67584311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Annu99",
"html_url": "https://github.com/Annu99",
"followers_url": "https://api.github.com/users/Annu99/followers",
"following_url": "https://api.github.com/users/Annu99/following{/other_user}",
"gists_url": "https://api.github.com/users/Annu99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Annu99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Annu99/subscriptions",
"organizations_url": "https://api.github.com/users/Annu99/orgs",
"repos_url": "https://api.github.com/users/Annu99/repos",
"events_url": "https://api.github.com/users/Annu99/events{/privacy}",
"received_events_url": "https://api.github.com/users/Annu99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Have you checked if your setup is working properly and if \r\n`torch.cuda.is_available()`\r\nreturns True?",
"Yeah\r\nIt gives initial output as:\r\nSome weights of the model checkpoint at facebook/bart-large were not used when initializing BartForConditionalGeneration: ['encoder.version', 'decoder.version']\r\n- This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large and are newly initialized: ['final_logits_bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nGPU available: True, used: True\r\nTPU available: False, using: 0 TPU cores\r\nCUDA_VISIBLE_DEVICES: [0]\r\nUsing APEX 16bit precision.\r\nSelected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.\r\n\r\nDefaults for this optimization level are:\r\nenabled : True\r\nopt_level : O1\r\ncast_model_type : None\r\npatch_torch_functions : True\r\nkeep_batchnorm_fp32 : None\r\nmaster_weights : None\r\nloss_scale : dynamic\r\nProcessing user overrides (additional kwargs that are not None)...\r\nAfter processing overrides, optimization options are:\r\nenabled : True\r\nopt_level : O1\r\ncast_model_type : None\r\npatch_torch_functions : True\r\nkeep_batchnorm_fp32 : None\r\nmaster_weights : None\r\nloss_scale : dynamic\r\nWarning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError(\"No module named 'amp_C'\",)\r\nValidation sanity check: 0it [00:00, ?it/s]\r\nThen fails.",
"What's the output of `transformers-cli env`?",
"Sorry I didnt get what you are asking for.\r\n",
"Can you run the command `transformers-cli env` in terminal and paste the output here in code format (using ```)",
"It says command not found.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # ❓ Questions & Help
I am fine tuning BART by executing transformers/examples/seq2seq/finetune.sh as instructed.
I am getting the following error:
```
Traceback (most recent call last):
File "finetune.py", line 346, in <module>
main(args)
File "finetune.py", line 324, in main
logger=logger,
File "/data/t-angand/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 928, in fit
self.single_gpu_train(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 183, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1086, in run_pretrain_routine
False)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 291, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 462, in evaluation_forward
output = model.validation_step(*args)
File "finetune.py", line 136, in validation_step
return self._generative_step(batch)
File "finetune.py", line 163, in _generative_step
generated_ids = self.model.generate(input_ids=source_ids, attention_mask=source_mask, use_cache=True,)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/data/t-angand/transformers/src/transformers/modeling_utils.py", line 1159, in generate
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/data/t-angand/transformers/src/transformers/modeling_bart.py", line 303, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
Thanks is advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5386/comments | https://api.github.com/repos/huggingface/transformers/issues/5386/events | https://github.com/huggingface/transformers/issues/5386 | 647,912,400 | MDU6SXNzdWU2NDc5MTI0MDA= | 5,386 | bart-large-cnn training related information | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"bart-large-cnn is trained on CNN/DM summarization dataset. See this for dataset info https://www.tensorflow.org/datasets/catalog/cnn_dailymail",
"Also available in the Hugging Face `nlp` [library](https://huggingface.co/datasets/cnn_dailymail) :) \r\n\r\nYou can explore the dataset [here](https://huggingface.co/nlp/viewer/?dataset=cnn_dailymail&config=3.0.0)\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The name is very confusing if it's trained on CNN and Dailymail but named as CNN (only without DM). Please consider changing the name. ",
"sorry for the confusion. it's too late to change it and the name is copied from fairseq. it's trained on the cnn/dailymail dataset.",
"I feel that you answered the question about the dataset, but what about the training configuration used?"
] | 1,593 | 1,644 | 1,598 | NONE | null | # ❓ Questions & Help
Hi Team HuggingFace
Can we please know the training data set,,total number of summary pairs used and the configuration used to train BART Summarizer (facebook/bart-large-cnn).
Thanks in advance!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5386/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5385/comments | https://api.github.com/repos/huggingface/transformers/issues/5385/events | https://github.com/huggingface/transformers/issues/5385 | 647,754,832 | MDU6SXNzdWU2NDc3NTQ4MzI= | 5,385 | Example: PyTorch Lightning returns missing attribute error (Token Classification) | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834060867,
"node_id": "MDU6TGFiZWwxODM0MDYwODY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Named%20Entity%20Recognition",
"name": "Ex: Named Entity Recognition",
"color": "06FFD8",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | COLLABORATOR | null | Hi,
the following error message is currently thrown (version 3.0.0 of Transformers) when running the `run_ner_pl.sh` example from the token classification example:
```bash
Traceback (most recent call last):
File "run_pl_ner.py", line 198, in <module>
trainer = generic_train(model, args)
File "/mnt/transformers-stefan/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1093, in run_pretrain_routine
self.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 335, in train
self.reset_train_dataloader(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 189, in reset_train_dataloader
self.train_dataloader = self.request_dataloader(model.train_dataloader)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/mnt/transformers-stefan/examples/lightning_base.py", line 142, in train_dataloader
* float(self.hparams.num_train_epochs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/parsing.py", line 116, in __getattr__
raise AttributeError(f'Missing attribute "{key}"')
AttributeError: Missing attribute "n_gpu"
```
It seems that `n_gpu` is set in the `lightning_base.py` file:
https://github.com/huggingface/transformers/blob/7f60e93ac5c73e74b5a00d57126d156be9dbd2b8/examples/lightning_base.py#L139-L142
The example is running when `n_gpu` is changed to `gpus`.
I will prepare for fixing it and hopefully this does not introduce regression bugs for other PL examples 😅 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5384/comments | https://api.github.com/repos/huggingface/transformers/issues/5384/events | https://github.com/huggingface/transformers/issues/5384 | 647,744,088 | MDU6SXNzdWU2NDc3NDQwODg= | 5,384 | Positional and Segment Embeddings in BERT | {
"login": "AkshitaJha",
"id": 8939340,
"node_id": "MDQ6VXNlcjg5MzkzNDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8939340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AkshitaJha",
"html_url": "https://github.com/AkshitaJha",
"followers_url": "https://api.github.com/users/AkshitaJha/followers",
"following_url": "https://api.github.com/users/AkshitaJha/following{/other_user}",
"gists_url": "https://api.github.com/users/AkshitaJha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AkshitaJha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AkshitaJha/subscriptions",
"organizations_url": "https://api.github.com/users/AkshitaJha/orgs",
"repos_url": "https://api.github.com/users/AkshitaJha/repos",
"events_url": "https://api.github.com/users/AkshitaJha/events{/privacy}",
"received_events_url": "https://api.github.com/users/AkshitaJha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"They are not predicted, they are learnt alongside the token embedding themselves. They work exactly the same w.r.t. to training: they are all simply embeddings but with a different vocab size.\r\n\r\nhttps://github.com/huggingface/transformers/blob/9a473f1e43221348334b9e7f95bb45770b7ef268/src/transformers/modeling_bert.py#L154-L156\r\n\r\nThe output of all three embeddings are summed up before passing them to the transformer layers.\r\n\r\nPositional embeddings can help because they basically highlight the position of a word in the sentence. A word in the first position likely has another meaning/function than the last one. Also, the same word likely will have a different syntactic function in the first vs. last position. Positional embeddings thus play some kind of syntactic role where they tell the model that a word can have a different meaning/syntactic function depending on its position.",
"Thank you for the explanation. What is the benefit of learning positional embeddings as opposed to the technique adopted by transformers where the positional encoding is directly added to each input embedding and not learnt?",
"When is a positional embedding directly added? An embedding is an embedding and as such always needs to be learned or transfered. ",
"Section 3.5 of the paper '[Attention is All You Need](https://arxiv.org/pdf/1706.03762.pdf)' explains the positional encoding in the case of transformers. They use _'sine and cosine functions of different frequencies'_ to inject information about the position of the tokens. Learned positional embeddings do not seem to help in the case of the original transformers. BERT on the other hand 'learns' positional embeddings. I wanted to understand the benefits of learning these positional embeddings in BERT.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,603 | 1,603 | NONE | null | #### Question
I'm trying to train a BERT language model from scratch using the huggingface library. I'm not sure I completely understand how positional and segment embeddings work in BERT. The original BERT paper states that unlike transformers, positional and segment embeddings are learned. What exactly does this mean?
How do positional embeddings help in predicting masked tokens? Is the positional embedding of the masked token predicted along with the word?
How has this been implemented in the huggingface library? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5383/comments | https://api.github.com/repos/huggingface/transformers/issues/5383/events | https://github.com/huggingface/transformers/pull/5383 | 647,696,556 | MDExOlB1bGxSZXF1ZXN0NDQxNjY5MDk0 | 5,383 | Documentation for the Trainer API | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=h1) Report\n> Merging [#5383](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.19%`.\n> The diff coverage is `77.71%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5383 +/- ##\n==========================================\n- Coverage 77.01% 76.81% -0.20% \n==========================================\n Files 128 138 +10 \n Lines 21615 24314 +2699 \n==========================================\n+ Hits 16646 18676 +2030 \n- Misses 4969 5638 +669 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=footer). Last update [482a599...c56586e](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great! Much needed! Also adding @thomwolf who might want to chime in",
"The TF documentation looks much better!! :smile: Thanks @sgugger ",
"Thanks for catching all my bad copy-pastes @jplu \r\nConcerning `n_gpu`, I think the change of name to `n_replica`/`n_device` might be welcome, especially since it does not represent the same thing as `n_gpu` in the PyTorch `Trainer` (which is only more than 1 when using dp instead of ddp). But that probably would go better in another PR!"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This PR introduces documentation for the Trainer/TFTrainer classes and the argument dataclasses.
While documenting, I noticed that the method `Trainer.evaluate` and `TFTrainer.evaluate` had an argument `pred_loss_only` which was not used at all (my guess is that it's the same argument passed at init that is used). I removed that argument. I'm aware this is a breaking change but, as a user, I'd expect a hard error when passing something that is not used at all. Let me know if there was a good reason for it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5383/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 5,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5383/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5383",
"html_url": "https://github.com/huggingface/transformers/pull/5383",
"diff_url": "https://github.com/huggingface/transformers/pull/5383.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5383.patch",
"merged_at": 1593531824000
} |
https://api.github.com/repos/huggingface/transformers/issues/5382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5382/comments | https://api.github.com/repos/huggingface/transformers/issues/5382/events | https://github.com/huggingface/transformers/issues/5382 | 647,675,235 | MDU6SXNzdWU2NDc2NzUyMzU= | 5,382 | cannot import name 'AutoModelForSeq2SeqLM' from transformers | {
"login": "PingYu-iris",
"id": 23408859,
"node_id": "MDQ6VXNlcjIzNDA4ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/23408859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PingYu-iris",
"html_url": "https://github.com/PingYu-iris",
"followers_url": "https://api.github.com/users/PingYu-iris/followers",
"following_url": "https://api.github.com/users/PingYu-iris/following{/other_user}",
"gists_url": "https://api.github.com/users/PingYu-iris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PingYu-iris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PingYu-iris/subscriptions",
"organizations_url": "https://api.github.com/users/PingYu-iris/orgs",
"repos_url": "https://api.github.com/users/PingYu-iris/repos",
"events_url": "https://api.github.com/users/PingYu-iris/events{/privacy}",
"received_events_url": "https://api.github.com/users/PingYu-iris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"what's your `transformers-cli env` output? You should be on 3.0.0 or install from source for this to work."
] | 1,593 | 1,595 | 1,595 | NONE | null | cannot import name 'AutoModelForSeq2SeqLM' from transformers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5381/comments | https://api.github.com/repos/huggingface/transformers/issues/5381/events | https://github.com/huggingface/transformers/issues/5381 | 647,611,251 | MDU6SXNzdWU2NDc2MTEyNTE= | 5,381 | AssertionError: Padding_idx must be within num_embeddings | {
"login": "aranciokov",
"id": 18611292,
"node_id": "MDQ6VXNlcjE4NjExMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/18611292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aranciokov",
"html_url": "https://github.com/aranciokov",
"followers_url": "https://api.github.com/users/aranciokov/followers",
"following_url": "https://api.github.com/users/aranciokov/following{/other_user}",
"gists_url": "https://api.github.com/users/aranciokov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aranciokov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aranciokov/subscriptions",
"organizations_url": "https://api.github.com/users/aranciokov/orgs",
"repos_url": "https://api.github.com/users/aranciokov/repos",
"events_url": "https://api.github.com/users/aranciokov/events{/privacy}",
"received_events_url": "https://api.github.com/users/aranciokov/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have no problem loading this in `transformers`. These files have not been updated in the past two months either; can you try doing the same in `transformers` or are you constrained by Python 2?",
"I'm constrained to Python 2 right now (due to a decently large codebase which I do not have enough time to upgrade to Py3 in these months), and that's why I'm using such an old version the library (which is the latest available for Py2).",
"Hmm, okay I think I found the source of the issue. It seems the configuration file was indeed changed, the April 24th in a non-backwards compatible way (cc @julien-c). Very sorry about this.\r\n\r\nIn order to fix this you can use this JSON as a file on your machine:\r\n\r\n```json\r\n{\r\n \"architectures\": [\r\n \"XLMWithLMHeadModel\"\r\n ],\r\n \"emb_dim\": 2048,\r\n \"n_layers\": 12,\r\n \"n_heads\": 16,\r\n \"dropout\": 0.1,\r\n \"attention_dropout\": 0.1,\r\n \"gelu_activation\": true,\r\n \"sinusoidal_embeddings\": false,\r\n \"asm\": false,\r\n \"bos_index\": 0,\r\n \"eos_index\": 1,\r\n \"pad_index\": 2,\r\n \"unk_index\": 3,\r\n \"mask_index\": 5,\r\n \"n_langs\": 1,\r\n \"n_words\": 30145\r\n}\r\n```\r\n\r\nand then load it as you did, but replacing the weights name with the filepath:\r\n\r\n```py\r\nfrom pytorch_transformers import XLMModel, XLMConfig, XLMTokenizer\r\n\r\nmodel_class, tokenizer_class, pretrained_weights = (XLMModel, XLMTokenizer, 'xlm-mlm-en-2048')\r\nconfig = XLMConfig.from_pretrained(PATH_TO_FILE)\r\nxlm_model = model_class.from_pretrained(pretrained_weights, config=config)\r\n```",
"Everything's working now (both the four-liner and my codebase as well!). Thanks! ",
"Glad you could make it work!"
] | 1,593 | 1,593 | 1,593 | NONE | null | ### Information
Model I am using: XLM
### To reproduce
```
from pytorch_transformers import XLMModel, XLMConfig, XLMTokenizer
model_class, tokenizer_class, pretrained_weights = (XLMModel, XLMTokenizer, 'xlm-mlm-en-2048')
config = XLMConfig.from_pretrained(pretrained_weights)
xlm_model = model_class.from_pretrained(pretrained_weights, config=config)
```
**Error:**
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/venv27/lib/python2.7/site-packages/pytorch_transformers/modeling_utils.py", line 536, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/username/venv27/lib/python2.7/site-packages/pytorch_transformers/modeling_xlm.py", line 545, in __init__
self.embeddings = nn.Embedding(self.n_words, self.dim, padding_idx=self.pad_index)
File "/home/username/venv27/lib/python2.7/site-packages/torch/nn/modules/sparse.py", line 88, in __init__
assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'
AssertionError: Padding_idx must be within num_embeddings
```
### Expected behavior
Until recently (last I checked was 2 months ago, but since then I never updated anything), everything worked without issues and I was able to load the model with those 4 instructions easily (which is what I am expecting).
### Environment info
pytorch_transformers: 1.2.0
Python: 2.7
pytorch: 1.4.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5380/comments | https://api.github.com/repos/huggingface/transformers/issues/5380/events | https://github.com/huggingface/transformers/issues/5380 | 647,601,050 | MDU6SXNzdWU2NDc2MDEwNTA= | 5,380 | Unable to load pre-trained model/tokenizer when using Kubernetes | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you have a code sample to reproduce this?",
"@LysandreJik sure, create a file named `train.py` containing only the following 2 lines:\r\n```\r\nfrom transformers import GPT2Tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n```\r\n\r\nCreate `helm/mpijob_chart/templates/job.yaml` as follows:\r\n\r\n```\r\napiVersion: kubeflow.org/v1alpha2\r\nkind: MPIJob\r\nmetadata:\r\n name: {{ .Values.jobName }}\r\nspec:\r\n slotsPerWorker: {{ .Values.gpusPerNode }}\r\n cleanPodPolicy: Running\r\n backoffLimit: 0\r\n mpiReplicaSpecs:\r\n Worker:\r\n replicas: {{ .Values.nodeCount }}\r\n restartPolicy: Never\r\n template:\r\n spec:\r\n nodeSelector:\r\n beta.kubernetes.io/instance-type: {{ .Values.nodeType }}\r\n volumes:\r\n - name: fsx\r\n persistentVolumeClaim:\r\n claimName: fsx-claim\r\n containers:\r\n - image: {{ .Values.image }}\r\n imagePullPolicy: Always\r\n name: worker\r\n # This will allocate 8 GPUs for each worker, helping Kubernetes place resources.\r\n # Should match slotsPerWorker\r\n resources:\r\n limits:\r\n nvidia.com/gpu: {{ .Values.gpusPerNode }}\r\n # This mounts onto an external FSx volume, useful for storing the training data here.\r\n volumeMounts:\r\n - name: fsx\r\n mountPath: /fsx\r\n # This exposes the Docker container to listen to the mpirun ports\r\n env:\r\n - name: NCCL_SOCKET_IFNAME\r\n value: ^lo,docker0\r\n # Place the subdirectory on the Python path\r\n - name: PYTHONPATH\r\n value: {{ .Values.pythonPath }}\r\n # Some initial preparation, sleep for 14 days so the container will stay alive and respond to mpirun\r\n command: [\"/bin/bash\"]\r\n args: [\"-c\", \"sleep 1209600\"]\r\n Launcher:\r\n replicas: 1\r\n restartPolicy: Never\r\n template:\r\n spec:\r\n nodeSelector:\r\n beta.kubernetes.io/instance-type: {{ .Values.nodeType }}\r\n volumes:\r\n - name: fsx\r\n persistentVolumeClaim:\r\n claimName: fsx-claim\r\n containers:\r\n - image: {{ .Values.image }}\r\n imagePullPolicy: Always\r\n name: launcher\r\n volumeMounts:\r\n - mountPath: /fsx\r\n name: fsx\r\n # Wait 15 seconds so any additional dependencies being manually installed will finish on worker nodes\r\n command: [\"/bin/bash\"]\r\n args:\r\n - \"-c\"\r\n - \"sleep 15 && \\\r\n mpirun -np {{ mul .Values.nodeCount .Values.gpusPerNode }} \\\r\n --allow-run-as-root \\\r\n --timestamp-output \\\r\n -bind-to none -map-by slot \\\r\n -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \\\r\n -mca pml ob1 -mca btl ^openib \\\r\n {{ .Values.command1 }} \"\r\n```\r\n\r\n`helm/mpijob_chart/values.yaml` for the above `job.yaml`:\r\n\r\n```\r\n# Each job name must be unique to the Kubernetes cluster\r\njobName: gpt2-training\r\nnodeCount: 1\r\n# If gpusPerNode is too high, the job will pend forever.\r\ngpusPerNode: 7\r\nnodeType: p3dn.24xlarge\r\nimage: <REPLACE WITH YOUR OWN IMAGE HERE, I suggest building a custom image from Horovod base image with transformers installed in it>\r\npythonPath: /fsx/myCodeRepo\r\ncommand1: \"python /fsx/myCodeRepo/train.py &> /fsx/testout.txt\"\r\n```\r\n\r\nFor setting up the shared file-system for the cluster (i.e., FSx, the persistent volume and persistent volume claim YAMLs), the instructions/templates are here: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/tree/master/examples/kubernetes/static_provisioning\r\n\r\nCopy `train.py` to `/fsx/myCodeRepo/` in the shared file-system. Then you can use `helm install trainjob helm/mpijob_chart` to run the job.",
"Have you tried with changing the `cache_dir` when using `from_pretrained`?",
"@LysandreJik yes, I changed it to:\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\", cache_dir=\"/fsx/transformers_cache/\")\r\n```\r\n\r\nThe error is still the same. Instead of giving `FileExistsError ` with `/root/.cache/torch/transformers`, it gives the error with `/fsx/transformers_cache/`. And the `OSError` is also thrown just like before.",
"@LysandreJik it seems that this error only happens when I have multiple processes during training. I tried a simple test of using 1 node and only 1 GPU in that node during training, and that is working totally fine.\r\n\r\nUPDATE: So as a temporary workaround, what I did was I kicked off a job with 1 node, 1 GPU, allowing for the pre-trained tokenizer/model caching to happen fine. Then I killed the job immediately, and kicked off a new one with N nodes, 8 GPUs. No errors this time. But this is kinda a hack. Do you have a better solution?"
] | 1,593 | 1,594 | 1,594 | NONE | null | # 🐛 Bug
## Information
Model I am using: GPT-2
The problem arises when using:
* [Y] my own modified scripts: (give details below)
The tasks I am working on is:
* [Y] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a Kubernetes cluster, a Docker image with all needed dependencies, and a shared mount file-system for the cluster (the file-system contains the code and training data).
2. Create an `MPIJob` YAML file for running training.
3. Deploy the job on the cluster using `kubectl` or `helm`.
Here is the error I get (I have removed references to my custom code since that is irrelevant for this error):
```
Mon Jun 29 06:31:29 2020<stderr>:Traceback (most recent call last):
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 379, in _from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: resolved_vocab_files[file_id] = cached_path(file_path, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 211, in cached_path
Mon Jun 29 06:31:29 2020<stderr>: resume_download=resume_download, user_agent=user_agent)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 312, in get_from_cache
Mon Jun 29 06:31:29 2020<stderr>: os.makedirs(cache_dir)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/lib/python3.6/os.py", line 220, in makedirs
Mon Jun 29 06:31:29 2020<stderr>: mkdir(name, mode)
Mon Jun 29 06:31:29 2020<stderr>:FileExistsError: [Errno 17] File exists: '/root/.cache/torch/transformers'
Mon Jun 29 06:31:29 2020<stderr>:
Mon Jun 29 06:31:29 2020<stderr>:During handling of the above exception, another exception occurred:
Mon Jun 29 06:31:29 2020<stderr>:
Mon Jun 29 06:31:29 2020<stderr>:Traceback (most recent call last):
Mon Jun 29 06:31:29 2020<stderr>: tokenizer = tokenizer_class.from_pretrained(args.model_checkpoint)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 302, in from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: return cls._from_pretrained(*inputs, **kwargs)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 391, in _from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: raise EnvironmentError(msg)
Mon Jun 29 06:31:29 2020<stderr>:OSError: Couldn't reach server at '{}' to download vocabulary files.
```
## Expected behavior
I wouldn't expect `FileExistsError: [Errno 17] File exists: '/root/.cache/torch/transformers'` to show up.
I also wouldn't expect `OSError: Couldn't reach server at '{}' to download vocabulary files.` to show up.
To debug whether there is a network issue with my containers causing the above `OSError`, I created a separate sample Pod and within the container, I started up python and tried to load a tokenizer, which worked just fine. So I don't think this is a network issue. I also don't quite understand why the server path is empty in the above log.
## Environment info
- `transformers` version: 2.3.0
- Platform: AWS Elastic Kubernetes Service (EKS) cluster
- Python version: 3.6
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5380/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5379/comments | https://api.github.com/repos/huggingface/transformers/issues/5379/events | https://github.com/huggingface/transformers/issues/5379 | 647,600,962 | MDU6SXNzdWU2NDc2MDA5NjI= | 5,379 | Cannot reduce n_ctx for distil gpt2 from 1024 to 256 | {
"login": "nim17",
"id": 54574009,
"node_id": "MDQ6VXNlcjU0NTc0MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/54574009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nim17",
"html_url": "https://github.com/nim17",
"followers_url": "https://api.github.com/users/nim17/followers",
"following_url": "https://api.github.com/users/nim17/following{/other_user}",
"gists_url": "https://api.github.com/users/nim17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nim17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nim17/subscriptions",
"organizations_url": "https://api.github.com/users/nim17/orgs",
"repos_url": "https://api.github.com/users/nim17/repos",
"events_url": "https://api.github.com/users/nim17/events{/privacy}",
"received_events_url": "https://api.github.com/users/nim17/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! Do you mind filling in the template?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): gpt2
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5379/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5378/comments | https://api.github.com/repos/huggingface/transformers/issues/5378/events | https://github.com/huggingface/transformers/pull/5378 | 647,575,372 | MDExOlB1bGxSZXF1ZXN0NDQxNTc1OTMz | 5,378 | Mention openAI model card and merge content | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=h1) Report\n> Merging [#5378](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `77.67%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5378 +/- ##\n==========================================\n- Coverage 77.01% 76.80% -0.21% \n==========================================\n Files 128 138 +10 \n Lines 21615 24314 +2699 \n==========================================\n+ Hits 16646 18675 +2029 \n- Misses 4969 5639 +670 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| ... and [158 more](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=footer). Last update [482a599...3595179](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5378",
"html_url": "https://github.com/huggingface/transformers/pull/5378",
"diff_url": "https://github.com/huggingface/transformers/pull/5378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5378.patch",
"merged_at": 1593469657000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5377/comments | https://api.github.com/repos/huggingface/transformers/issues/5377/events | https://github.com/huggingface/transformers/issues/5377 | 647,573,950 | MDU6SXNzdWU2NDc1NzM5NTA= | 5,377 | New tokenizer code in transformer 3.0.0 is creating error with old code | {
"login": "llStringll",
"id": 30209072,
"node_id": "MDQ6VXNlcjMwMjA5MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llStringll",
"html_url": "https://github.com/llStringll",
"followers_url": "https://api.github.com/users/llStringll/followers",
"following_url": "https://api.github.com/users/llStringll/following{/other_user}",
"gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llStringll/subscriptions",
"organizations_url": "https://api.github.com/users/llStringll/orgs",
"repos_url": "https://api.github.com/users/llStringll/repos",
"events_url": "https://api.github.com/users/llStringll/events{/privacy}",
"received_events_url": "https://api.github.com/users/llStringll/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi, this is not an error but a warning. If you want to disable warnings, you can use the following:\r\n\r\n```py\r\nimport logging\r\n\r\nlogging.basicConfig(level=logging.ERROR)\r\n```",
"Oh, I'm sorry for writing \"error\" everywhere, but I want to know, is this default behaviour correct for BeRT, it says by default it'll use only_first",
"You can read the documentation concerning that method [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__).\r\n\r\nHere's the part you're probably interested in: \r\n\r\n> ‘only_first’: truncate to a max length specified in max_length or to the max acceptable input length for the model if no length is provided (max_length=None). This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided,\r\n\r\nSince you're only using a single sentence, it seems to be what you're looking for?",
"I am also concatenating multiple \"context\" sequences using [SEP] token, I'm just feeding sequences into encode_plus and stripping off the [CLS] token at the beginning and concatenating the rest with the old one, making it\r\n[CLS]seq1[SEP]seq2[SEP]\r\nI assume, even earlier, in older version, when this warning wasnt being logged to the terminal, it still used only_first, did it?",
"Is there a reason why you're not using the `encode_plus` method with your pairs? The tokenizer will automatically build the pairs as the model expects them.\r\n\r\nIf you pass a single sequence and want to build them with the special tokens yourself, you can use the `add_special_tokens=False` flag. No need to strip the special tokens then.\r\n\r\nBe careful as since you're building the sequences yourself, if you truncate these some of the special tokens might be cut off.",
"This is my snippet, \"texts\" is a list of strings\r\n```\r\ndef __call__(self, texts):\r\n input_ids_list, segment_ids_list, input_masks_list = [], [], []\r\n\r\n for text in texts[::-1][:self.max_history]:\r\n tokenized_dict = self.tokenizer.encode_plus(text,\r\n text_pair=None,\r\n add_special_tokens=True,\r\n max_length=self.max_len,\r\n pad_to_max_length=False)\r\n input_ids, input_masks = tokenized_dict['input_ids'], tokenized_dict['attention_mask']\r\n segment_ids = [1] * len(input_ids)\r\n if len(input_ids_list) > 0:\r\n input_ids = input_ids[1:]\r\n segment_ids = segment_ids[1:]\r\n input_masks = input_masks[1:]\r\n input_ids_list.extend(input_ids)\r\n segment_ids_list.extend(segment_ids)\r\n input_masks_list.extend(input_masks)\r\n\r\n if len(input_ids_list) >= self.max_len:\r\n input_ids_list = input_ids_list[:self.max_len - 1] + [self.sep_id]\r\n segment_ids_list = segment_ids_list[:self.max_len]\r\n input_masks_list = input_masks_list[:self.max_len]\r\n break\r\n input_ids_list += [self.pad_id] * (self.max_len - len(input_ids_list))\r\n segment_ids_list += [0] * (self.max_len - len(segment_ids_list))\r\n input_masks_list += [0] * (self.max_len - len(input_masks_list))\r\n\r\n assert len(input_ids_list) == self.max_len\r\n assert len(segment_ids_list) == self.max_len\r\n assert len(input_masks_list) == self.max_len\r\n\r\n return input_ids_list, segment_ids_list, input_masks_list\r\n```\r\nI'm not truncating anything after I create my full sequence. That suggestion was great for using text_pair, I dont know why I didnt think of that. Thank you\r\nPS- Is this way of creating input ids correct, and in the older version, when this warning wasn't being logged to the terminal, was it using only_first even then?",
"In that snippet, are you trying to concatenate a lot of sequences together? If you have 10 sequences in your text, you want to have a giant `input_ids_list` containing all the 10 sequences separated by a separator token?",
"Yes, exactly, thats what I am doing, and I am then strippin off earlier later part, coz I am flippin the list too. Basically the list is a conversation where I am making a new token list out of recent N words of conversation",
"For anyone stumbling across this issue and having problems with sentence pair classification in v3.0.0:\r\n\r\nIn v3.0.0, the default truncation strategy was changed, which causes code that used to work in v2.11.0 to break in some cases.\r\n**v2.11.0**: default truncation strategy is `longest_first` \r\n**v.3.0.0**: truncation strategy appears to default to `only_first`\r\n\r\nFor sentence pair classification in v3.0.0, this can result in a failure to truncate sentence pair to the supplied `max_length` parameter, which can break downstream model or other code:\r\n```\r\nW0702 12:56:50.435204 140139424331584 tokenization_utils_base.py:1447] Truncation was not explicitely activated but \r\n`max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length.\r\n Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may \r\nwant to check this is the right behavior.\r\nE0702 12:56:50.437675 140139424331584 tokenization_utils.py:784] We need to remove 25 to truncate the input but the first\r\n sequence has a length 17. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance \r\n'longest_first' or 'only_second'.\r\n```\r\n\r\nFor example, the following code prints **32** in v2.11.0, but **57** in v3.0.0:\r\n\r\n```python\r\ntext_a = '''Debunk this: Six Corporations Control $NUMBER$% Of The Media In America'''\r\ntext_b = '''\r\nI can't believe people are missing the two obvious flaws in this analysis. \r\nThis infographic doesn't show that $NUMBER$ companies control $NUMBER$% of the media. '''\r\nfrom transformers import *\r\nt = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\noutput = t.encode_plus(text_a, text_b, max_length=32)\r\nprint(len(output['input_ids']))\r\n```\r\n\r\nThe solution is to explicitly provide `truncate='longest_first`:, as indicated in the warning: \r\n```python\r\noutput = t.encode_plus(text_a, text_b, max_length=32, truncation='longest_first')\r\n```",
"Good point. We will release a patch to fix this breaking change (move back to having `longest_first` as default) plus the one mentioned in #5447 probably tomorrow or early next week. ",
"@thomwolf: Thanks - changing the default back to `longest_first` may also address #5460 "
] | 1,593 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BeRT and GPT2 for Poly-encoder implementation
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Ubuntu V2.0 corpus dataset, implementing Poly-encoder pipeline, everything was done, was re-training the model again to verify the results of the first training process.
## To reproduce
Steps to reproduce the behavior:
Happens when using some_tokenizer_fromHF.encode_plus(), below is the eval script to test on custom input text, is exactly same as the one reading from dataset during training(simplified for eval)
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
c_text="what is your name. I am Gloid"
context=tokenizer.encode_plus(c_text,
text_pair=None,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=False)
texts=["Obama", "Trump", "Eminem", "slender man", "Pewdiepie"]
for text in texts:
tokenized_dict = tokenizer.encode_plus(text,
text_pair=None,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=True)
```
The error is -
"Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior."
Repeated a total of 6 times, i.e., for every sequence passed into encode_plus
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Not to give this error, and return the input ids, segment ids, and input masks
The issue is completely identical to the closed issue - https://github.com/huggingface/transformers/issues/5155
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5377/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5376/comments | https://api.github.com/repos/huggingface/transformers/issues/5376/events | https://github.com/huggingface/transformers/issues/5376 | 647,573,680 | MDU6SXNzdWU2NDc1NzM2ODA= | 5,376 | Unable to load Longformer pretrained weights | {
"login": "qianyingw",
"id": 32983355,
"node_id": "MDQ6VXNlcjMyOTgzMzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/32983355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qianyingw",
"html_url": "https://github.com/qianyingw",
"followers_url": "https://api.github.com/users/qianyingw/followers",
"following_url": "https://api.github.com/users/qianyingw/following{/other_user}",
"gists_url": "https://api.github.com/users/qianyingw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qianyingw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qianyingw/subscriptions",
"organizations_url": "https://api.github.com/users/qianyingw/orgs",
"repos_url": "https://api.github.com/users/qianyingw/repos",
"events_url": "https://api.github.com/users/qianyingw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qianyingw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Do you have internet access? Does the folder `allenai/longformer-base-4096` exist on your machine?",
"Ahh...sorry I thought it was included in the library...Thank you for your reply!",
"It is included in the library :) I'm asking because it may be possible that you have a folder that has the same name, so the library is looking to load that folder instead of [the weights we have on S3.](https://huggingface.co/allenai/longformer-base-4096)",
"@LysandreJik I don't have any existing folder called 'allenai/longformer-base-4096' (or 'bert-base-uncased' for BERT) and I can't load pretrained weights until I download them to my local machines"
] | 1,593 | 1,595 | 1,595 | NONE | null | Hi,
I am following the official example to load longformer model by:
```
from transformers import LongformerModel, LongformerTokenizer
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
```
I can load tokenizer and config, but when I try to load model, it gave me the error "OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True." It happened on my other transformer models like BERT/XLNet, and I solved it by their corresponding functions "convert_tf_checkpoint_to_pytorch". I don't understand why this happen on Longformer as it is not from any tensorflow module?
I'm using
- `transformers` version: 3.0.0
- Python version: 3.7.4
- PyTorch version: 1.2.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5375/comments | https://api.github.com/repos/huggingface/transformers/issues/5375/events | https://github.com/huggingface/transformers/pull/5375 | 647,558,746 | MDExOlB1bGxSZXF1ZXN0NDQxNTYyNDU0 | 5,375 | Update dependency mecab to at least version 1.0 | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, I pinned this on purpose because v1.0 has breaking changes. Tokenization is not done the same way (once you go through the obvious bugs) because the underlying dictionary seems different. Not sure how we can go to v1.0 until pretrained models using this tokenizer have been updated (if that's ever done).",
"Could you give more information so that I can post a new issue over at mecab? (Or even better, if you have the time, post the issue yourself?)",
"Well the breaking changes are numerous.\r\nFirst, the library does not provide a dictionary again and requires to install another dependency:\r\n```\r\npip install unidic-lite\r\n```\r\nOnce the installation problem is passed, we need to fix the tokenizer by changing [this line](https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/tokenization_bert_japanese.py#L207) to `token = line.split(\"\\t\")[0]` because the tokenizer now returns multiple tab-separated things.\r\n\r\nOnce this step is done, the [test_mecab_tokenizer](https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/tests/test_tokenization_bert_japanese.py#L90) still fails because the tokenization of \" \\tアップルストアでiPhone8 が \\n 発売された 。 \" is not [\"アップルストア\", \"で\", \"iPhone\", \"8\", \"が\", \"発売\", \"さ\", \"れ\", \"た\", \"。\"] anymore but ['アップル', 'ストア', 'で', 'iPhone', '8', 'が', '発売', 'さ', 'れ', 'た', '。'].\r\n\r\nAnd I did not look at other tests yet, since that tokenization change alone is enough to warrant a pin on older releases, given the fact it will break models pretrained with that tokenizer.\r\n\r\n(Note that the mecab dependency is only used for these tests anyway and that a Windows user calling pytest on the tests folder will have no issue since the tokenizer tests are ignored by default.)\r\n",
"Gotcha. I would like to see the move to v1 in the future though, for the reasons mentioned before. Perhaps this is something to plan for transformers v4? That would give users plenty of time still to use their current models, and in having a new major version it is no surprise there can be compatibility changes.",
"Hello, I'm the mecab-python3 maintainer. Please feel free to tag me on any discussions about this if there's a way I can help.\r\n\r\nI would strongly suggest switching to the new dictionary when you can - the dictionary you currently use, ipadic, hasn't been updated since at least 2007 and will never be updated again (the organization that created it no longer does that kind of work). In contrast Unidic is maintained by the NINJAL, the organization in charge of Japanese Universal Dependencies.",
"Hi @polm, thanks for getting involved! So from an accuracy point of view, the new dictionary is better than the previous version - presumably? The reason for updating, then, would be: better tokenisation, cross-platform pip install support. Downsides are that models that are already trained with the old tokenisation will behave differently if they use a different tokenizer. That's why I suggest to make the breaking change in v4 which is probably still some months away. If people want to use older models, they can always use an older version of mecab.\r\n\r\n@sgugger I don't know if this is possible, but perhaps the model cards should include a section on which version of the library and/or dependencies (pip freeze) were used to train the model. This would also greatly help reproducibility issues.",
"ipadic was already pretty accurate, but yes, the new dictionary (Unidic) will at the very least be more accurate by virtue of having newer words, like 令和 (Reiwa), the current era name. You can read more about the different dictionaries [here](https://www.dampfkraft.com/nlp/japanese-tokenizer-dictionaries.html). \r\n\r\n> @sgugger I don't know if this is possible, but perhaps the model cards should include a section on which version of the library and/or dependencies (pip freeze) were used to train the model. This would also greatly help reproducibility issues.\r\n\r\nThis is also very important. One of my objectives in packaging unidic in pip was to make it easier to use the same dictionary version between projects.",
"AFAICT having a model on the Hub that doesn't work properly across all versions of the library would be a first, and set a dangerous precedent. IMO moving to mecab >= 1 requires to retrain the model on the hub with the new tokenizer and pulling out the old ones, as I've stated before. I don't know if @thomwolf or @julien-c feel differently.",
"IMO in the future the library shouldn't prescribe what tokenizer to use, i.e. the model cards from @singletongue at https://huggingface.co/cl-tohoku could let users know (or maybe even programmatically check) that they need a specific version of `mecab<1.0`, and we would remove `mecab` from the dependencies (users can install it if they need it)\r\n\r\nSame for sacremoses and sentencepiece which shouldn't be hard dependencies of the library anymore.\r\n\r\nIn the short term, I'm ok with pinning the dependency to <1.\r\n\r\nThoughts?",
"@julien-c Yes, I agree. This joins together what I wrote before: the ability/requirement for model cards to indicate some environment variables/dependencies that were used. So for instance, this could be part of the model card in a \"dependency\" section:\r\n\r\n```\r\nmecab==1.0.0\r\ntransformers>=2.8.1\r\n```\r\n",
"Then we just need to decide which version we use for the tests (since we can't have both mecab < 1 and mecab >= 1 at the same time.",
"Just a note, I am working on packaging ipadic the same way I packaged Unidic after several requests for backwards compatability, including from sacrebleu. That should be released in the next few weeks, see [this issue](https://github.com/SamuraiT/mecab-python3/issues/49). That way you'll get backward compatability and the better wheels in 1.0+. ",
"I released ipadic on PyPI. It works with the latest mecab-python3. You can use it like this:\r\n\r\n```\r\nimport MeCab\r\nimport ipadic\r\ntagger = MeCab.Tagger(ipadic.MECAB_ARGS)\r\nprint(tagger.parse(\"図書館にいた事がバレた\"))\r\n```\r\n\r\nI hope you'll change to Unidic as soon as is feasible anyway, but this way you can use the recent version of mecab-python3 with the old models.",
"I believe that since #6086 has been merged this can be closed.",
"Yes it can."
] | 1,593 | 1,596 | 1,596 | COLLABORATOR | null | Require at least version 1.0 of mecab, which comes with prebuilt wheels for all major platforms (OS X, Linux, Windows). This should do away with some reported incompatibility issues for non-linux systems.
See https://github.com/SamuraiT/mecab-python3/issues/31#issuecomment-651053281
Note: code untested but nothing much to test. [Version 1.0.0 is on PyPi](https://pypi.org/project/mecab-python3/1.0.0/) so it should work as written. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5375/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5375",
"html_url": "https://github.com/huggingface/transformers/pull/5375",
"diff_url": "https://github.com/huggingface/transformers/pull/5375.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5375.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5374/comments | https://api.github.com/repos/huggingface/transformers/issues/5374/events | https://github.com/huggingface/transformers/pull/5374 | 647,553,101 | MDExOlB1bGxSZXF1ZXN0NDQxNTU3ODM4 | 5,374 | How to share model cards with the CLI | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would reverse the two options for now (PR to the repo is still the recommended way at the moment)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=h1) Report\n> Merging [#5374](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.17%`.\n> The diff coverage is `77.67%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5374 +/- ##\n==========================================\n+ Coverage 77.01% 77.18% +0.17% \n==========================================\n Files 128 138 +10 \n Lines 21615 24314 +2699 \n==========================================\n+ Hits 16646 18766 +2120 \n- Misses 4969 5548 +579 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=footer). Last update [482a599...12d3941](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Switched the order."
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | As seen offline with @julien-c, adding a new recommended way to upload model cards along with models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5374",
"html_url": "https://github.com/huggingface/transformers/pull/5374",
"diff_url": "https://github.com/huggingface/transformers/pull/5374.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5374.patch",
"merged_at": 1593521973000
} |
https://api.github.com/repos/huggingface/transformers/issues/5373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5373/comments | https://api.github.com/repos/huggingface/transformers/issues/5373/events | https://github.com/huggingface/transformers/pull/5373 | 647,550,352 | MDExOlB1bGxSZXF1ZXN0NDQxNTU1NTg4 | 5,373 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=h1) Report\n> Merging [#5373](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5373 +/- ##\n==========================================\n- Coverage 77.49% 77.42% -0.07% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n- Hits 18843 18826 -17 \n- Misses 5471 5488 +17 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.40% <0.00%> (-42.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.28% <0.00%> (+0.58%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=footer). Last update [482a599...126ed9d](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - T5 pic uploaded to a more permanent place | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5373/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5373",
"html_url": "https://github.com/huggingface/transformers/pull/5373",
"diff_url": "https://github.com/huggingface/transformers/pull/5373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5373.patch",
"merged_at": 1593511305000
} |
https://api.github.com/repos/huggingface/transformers/issues/5372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5372/comments | https://api.github.com/repos/huggingface/transformers/issues/5372/events | https://github.com/huggingface/transformers/issues/5372 | 647,550,051 | MDU6SXNzdWU2NDc1NTAwNTE= | 5,372 | Albert pooling dimension mismatches | {
"login": "williamsdaniel888",
"id": 23630112,
"node_id": "MDQ6VXNlcjIzNjMwMTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/23630112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamsdaniel888",
"html_url": "https://github.com/williamsdaniel888",
"followers_url": "https://api.github.com/users/williamsdaniel888/followers",
"following_url": "https://api.github.com/users/williamsdaniel888/following{/other_user}",
"gists_url": "https://api.github.com/users/williamsdaniel888/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamsdaniel888/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamsdaniel888/subscriptions",
"organizations_url": "https://api.github.com/users/williamsdaniel888/orgs",
"repos_url": "https://api.github.com/users/williamsdaniel888/repos",
"events_url": "https://api.github.com/users/williamsdaniel888/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamsdaniel888/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): **albert-base-v2**
Language I am using the model on (English, Chinese ...): **English**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following code snippet. You will need the **gdown** package. Note that I am using a modified version of modeling_albert.py to avoid the bug described [here](https://github.com/huggingface/transformers/pull/4095) and [here](https://github.com/huggingface/transformers/issues/1188).
```shell
# Install Transformers
pip install transformers
# Create a working folder (skip if using Colab)
mkdir /content/
# Clone BAT repository to the folder
git clone https://github.com/akkarimi/Adversarial-Training-for-ABSA.git
# Download patches for BAT scripts in src/
cd /content/
git clone https://github.com/williamsdaniel888/reimagined-octo-tribble.git
mv /content/reimagined-octo-tribble/modeling_albert.py /usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py
mv /content/reimagined-octo-tribble/absa_data_utils.py /content/Adversarial-Training-for-ABSA/src/absa_data_utils.py
mv /content/reimagined-octo-tribble/asc_bert_pt.py /content/Adversarial-Training-for-ABSA/src/asc_bert_pt.py
mv /content/reimagined-octo-tribble/bat_asc.py /content/Adversarial-Training-for-ABSA/src/bat_asc.py
mv /content/reimagined-octo-tribble/run_asc.py /content/Adversarial-Training-for-ABSA/src/run_asc.py
# Training
cd /content/Adversarial-Training-for-ABSA/script
chmod +x ./run_absa.sh
./run_absa.sh asc laptop_pt laptop pt_asc 1 0
```
2. Open **/content/Adversarial-Training-for-ABSA/run/pt_asc/laptop/1/train_log.txt** to observe the stack trace.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
**Stack trace:**
```
Traceback (most recent call last):
File "../src/run_asc.py", line 280, in <module>
main()
File "../src/run_asc.py", line 275, in main
train(args)
File "../src/run_asc.py", line 122, in train
_loss, adv_loss = model(input_ids, segment_ids, input_mask, label_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/content/Adversarial-Training-for-ABSA/src/bat_asc.py", line 20, in forward
_loss = self.loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 932, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2317, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2113, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (1600) to match target batch_size (16).
```
A diagnostic printout of intermediate variables' sizes within the model's forward() method:
```
Shape of input_ids: torch.Size([16, 100])
Shape of embedding_output: torch.Size([16, 100, 128])
Shape of last encoded layer: torch.Size([16, 100, 768])
Shape of pooled_output: torch.Size([16, 100, 768])
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The pooling unit should pool over the second dimension of the last encoded layer, so that the shape of pooled_output is ```torch.Size([16, 768])``` instead of ```torch.Size([16, 100, 768])```.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Possible fix
I have worked around this issue by defining an AlbertPooler class for AlbertModel, similar to the BertPooler method for BertModel in modeling_bert.py:
```python
class AlbertPooler(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
```
Make sure to then update **AlbertModel's init()** method:
```
self.pooler = AlbertPooler(config)
```
and **AlbertModel's forward()** method:
```python
pooled_output = self.pooler(sequence_output)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5372/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5371/comments | https://api.github.com/repos/huggingface/transformers/issues/5371/events | https://github.com/huggingface/transformers/pull/5371 | 647,548,345 | MDExOlB1bGxSZXF1ZXN0NDQxNTUzOTMx | 5,371 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=h1) Report\n> Merging [#5371](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5371 +/- ##\n==========================================\n+ Coverage 77.49% 77.56% +0.06% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n+ Hits 18843 18858 +15 \n+ Misses 5471 5456 -15 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.93% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.75% <0.00%> (+2.05%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=footer). Last update [482a599...812f63a](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - Model pic uploaded to a more permanent place | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5371",
"html_url": "https://github.com/huggingface/transformers/pull/5371",
"diff_url": "https://github.com/huggingface/transformers/pull/5371.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5371.patch",
"merged_at": 1593511272000
} |
https://api.github.com/repos/huggingface/transformers/issues/5370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5370/comments | https://api.github.com/repos/huggingface/transformers/issues/5370/events | https://github.com/huggingface/transformers/pull/5370 | 647,546,091 | MDExOlB1bGxSZXF1ZXN0NDQxNTUyMDk1 | 5,370 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=h1) Report\n> Merging [#5370](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **increase** coverage by `0.34%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5370 +/- ##\n==========================================\n+ Coverage 77.49% 77.84% +0.34% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n+ Hits 18843 18928 +85 \n+ Misses 5471 5386 -85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (+4.54%)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=footer). Last update [482a599...7a1f1c6](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - Fix missing ```-``` in language meta
- T5 pic uploaded to a more permanent place | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5370",
"html_url": "https://github.com/huggingface/transformers/pull/5370",
"diff_url": "https://github.com/huggingface/transformers/pull/5370.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5370.patch",
"merged_at": 1593511230000
} |
https://api.github.com/repos/huggingface/transformers/issues/5369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5369/comments | https://api.github.com/repos/huggingface/transformers/issues/5369/events | https://github.com/huggingface/transformers/pull/5369 | 647,544,416 | MDExOlB1bGxSZXF1ZXN0NDQxNTUwNzM1 | 5,369 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | - Fix missing ```-``` in language meta
- T5 pic uploaded to a more permanent place | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5369",
"html_url": "https://github.com/huggingface/transformers/pull/5369",
"diff_url": "https://github.com/huggingface/transformers/pull/5369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5369.patch",
"merged_at": 1593511344000
} |
https://api.github.com/repos/huggingface/transformers/issues/5368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5368/comments | https://api.github.com/repos/huggingface/transformers/issues/5368/events | https://github.com/huggingface/transformers/pull/5368 | 647,481,358 | MDExOlB1bGxSZXF1ZXN0NDQxNTAwMzAx | 5,368 | Fix model card folder name so that it is consistent with model hub | {
"login": "chrisliu298",
"id": 59010212,
"node_id": "MDQ6VXNlcjU5MDEwMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/59010212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisliu298",
"html_url": "https://github.com/chrisliu298",
"followers_url": "https://api.github.com/users/chrisliu298/followers",
"following_url": "https://api.github.com/users/chrisliu298/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisliu298/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisliu298/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisliu298/subscriptions",
"organizations_url": "https://api.github.com/users/chrisliu298/orgs",
"repos_url": "https://api.github.com/users/chrisliu298/repos",
"events_url": "https://api.github.com/users/chrisliu298/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisliu298/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=h1) Report\n> Merging [#5368](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **decrease** coverage by `0.88%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5368 +/- ##\n==========================================\n- Coverage 77.49% 76.61% -0.89% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n- Hits 18843 18627 -216 \n- Misses 5471 5687 +216 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `24.19% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `17.45% <0.00%> (-21.94%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.10% <0.00%> (-17.35%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `65.26% <0.00%> (-11.58%)` | :arrow_down: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `26.31% <0.00%> (-1.32%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=footer). Last update [97f2430...37b7dc0](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5368/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5368",
"html_url": "https://github.com/huggingface/transformers/pull/5368",
"diff_url": "https://github.com/huggingface/transformers/pull/5368.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5368.patch",
"merged_at": 1593449670000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5367/comments | https://api.github.com/repos/huggingface/transformers/issues/5367/events | https://github.com/huggingface/transformers/pull/5367 | 647,471,970 | MDExOlB1bGxSZXF1ZXN0NDQxNDkyNTc1 | 5,367 | Add link to file and fix typos in model card | {
"login": "chrisliu298",
"id": 59010212,
"node_id": "MDQ6VXNlcjU5MDEwMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/59010212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisliu298",
"html_url": "https://github.com/chrisliu298",
"followers_url": "https://api.github.com/users/chrisliu298/followers",
"following_url": "https://api.github.com/users/chrisliu298/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisliu298/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisliu298/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisliu298/subscriptions",
"organizations_url": "https://api.github.com/users/chrisliu298/orgs",
"repos_url": "https://api.github.com/users/chrisliu298/repos",
"events_url": "https://api.github.com/users/chrisliu298/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisliu298/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5367",
"html_url": "https://github.com/huggingface/transformers/pull/5367",
"diff_url": "https://github.com/huggingface/transformers/pull/5367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5367.patch",
"merged_at": 1593444892000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5366/comments | https://api.github.com/repos/huggingface/transformers/issues/5366/events | https://github.com/huggingface/transformers/pull/5366 | 647,453,163 | MDExOlB1bGxSZXF1ZXN0NDQxNDc3MTU3 | 5,366 | Doc for v3.0.0 | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5366/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5366",
"html_url": "https://github.com/huggingface/transformers/pull/5366",
"diff_url": "https://github.com/huggingface/transformers/pull/5366.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5366.patch",
"merged_at": 1593443335000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5365/comments | https://api.github.com/repos/huggingface/transformers/issues/5365/events | https://github.com/huggingface/transformers/pull/5365 | 647,428,841 | MDExOlB1bGxSZXF1ZXN0NDQxNDU3MTMz | 5,365 | [seq2seq docs] Move evaluation down, fix typo | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5365/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5365",
"html_url": "https://github.com/huggingface/transformers/pull/5365",
"diff_url": "https://github.com/huggingface/transformers/pull/5365.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5365.patch",
"merged_at": 1593441364000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5364/comments | https://api.github.com/repos/huggingface/transformers/issues/5364/events | https://github.com/huggingface/transformers/issues/5364 | 647,424,739 | MDU6SXNzdWU2NDc0MjQ3Mzk= | 5,364 | Layer #0 (named "roberta") expects 0 weight(s), but the saved weights have 199 element(s) | {
"login": "QixinLi",
"id": 25460447,
"node_id": "MDQ6VXNlcjI1NDYwNDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/25460447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QixinLi",
"html_url": "https://github.com/QixinLi",
"followers_url": "https://api.github.com/users/QixinLi/followers",
"following_url": "https://api.github.com/users/QixinLi/following{/other_user}",
"gists_url": "https://api.github.com/users/QixinLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QixinLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QixinLi/subscriptions",
"organizations_url": "https://api.github.com/users/QixinLi/orgs",
"repos_url": "https://api.github.com/users/QixinLi/repos",
"events_url": "https://api.github.com/users/QixinLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/QixinLi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, I'm facing the same issue. Do you find any solutions ? :)",
"Hello, facing the same problem",
"cc @Rocketknight1 @gante :raised_hands: ",
"Directly using `TFRobertaMainLayer` is unusual and not recommended - @jessiewang158 can you try using `TFRobertaModel` instead?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,593 | 1,663 | 1,663 | NONE | null | # ❓ Questions & Help
## Details
I wanna build a new NLU model extented by TFRobertaPreTrainedModel.
And I write code with reference to 'TFRobertaForSequenceClassification' class.
```python
from transformers import TFRobertaPreTrainedModel, TFRobertaMainLayer
class TFRobertaForNLU(TFRobertaPreTrainedModel):
def __init__(self, config, *inputs, **kwargs):
super().__init__(config, *inputs, **kwargs)
self.roberta = TFRobertaMainLayer(config, name="roberta")
def call(self,inputs=None,attention_mask=None,token_type_ids=None,position_ids=None,head_mask=None,inputs_embeds=None,output_attentions=None,output_hidden_states=None,labels=None,training=False,):
pass
model = TFRobertaForNLU.from_pretrained("jplu/tf-xlm-roberta-base")
```
And I got an error:
```python
Traceback (most recent call last):
File "test.py", line 449, in <module>
File ".../lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 489, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 181, in load_weights
return super(Model, self).load_weights(filepath, by_name)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1175, in load_weights
saving.load_weights_from_hdf5_group_by_name(f, self.layers)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 751, in load_weights_from_hdf5_group_by_name
str(len(weight_values)) + ' element(s).')
ValueError: Layer #0 (named "roberta") expects 0 weight(s), but the saved weights have 199 element(s).
```
It seems like the 'roberta' layer not properly initialized. Am I wrong in somewhere? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5364/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5363/comments | https://api.github.com/repos/huggingface/transformers/issues/5363/events | https://github.com/huggingface/transformers/pull/5363 | 647,411,056 | MDExOlB1bGxSZXF1ZXN0NDQxNDQyNTY1 | 5,363 | [Benchmark] Readme for benchmark | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=h1) Report\n> Merging [#5363](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bcc35cd693cc0f62d2b4853cd8a5db8608a4abd&el=desc) will **decrease** coverage by `0.87%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5363 +/- ##\n==========================================\n- Coverage 77.65% 76.77% -0.88% \n==========================================\n Files 138 138 \n Lines 24314 24314 \n==========================================\n- Hits 18881 18668 -213 \n- Misses 5433 5646 +213 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.39% <0.00%> (-1.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=footer). Last update [4bcc35c...a73a58e](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c - should we go for this solution for now instead of letting users adapt a giant csv file?\r\nWe can always later adapt the way we present the data since it's just a README",
"sounds good"
] | 1,593 | 1,594 | 1,594 | MEMBER | null | At @sshleifer, @julien-c, @clmnt - shifting discussion about README.me here to include prev PR: #5360
in v3.0.0 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5363",
"html_url": "https://github.com/huggingface/transformers/pull/5363",
"diff_url": "https://github.com/huggingface/transformers/pull/5363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5363.patch",
"merged_at": 1594156884000
} |
https://api.github.com/repos/huggingface/transformers/issues/5362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5362/comments | https://api.github.com/repos/huggingface/transformers/issues/5362/events | https://github.com/huggingface/transformers/pull/5362 | 647,391,735 | MDExOlB1bGxSZXF1ZXN0NDQxNDI2NzA3 | 5,362 | Pin mecab for now | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This fixes the tests locally, so will hopefully fix the CI :-) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5362",
"html_url": "https://github.com/huggingface/transformers/pull/5362",
"diff_url": "https://github.com/huggingface/transformers/pull/5362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5362.patch",
"merged_at": 1593438674000
} |
https://api.github.com/repos/huggingface/transformers/issues/5361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5361/comments | https://api.github.com/repos/huggingface/transformers/issues/5361/events | https://github.com/huggingface/transformers/pull/5361 | 647,342,693 | MDExOlB1bGxSZXF1ZXN0NDQxMzg2MTk3 | 5,361 | [WIP] update pl=0.8.5 | {
"login": "williamFalcon",
"id": 3640001,
"node_id": "MDQ6VXNlcjM2NDAwMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3640001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamFalcon",
"html_url": "https://github.com/williamFalcon",
"followers_url": "https://api.github.com/users/williamFalcon/followers",
"following_url": "https://api.github.com/users/williamFalcon/following{/other_user}",
"gists_url": "https://api.github.com/users/williamFalcon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamFalcon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamFalcon/subscriptions",
"organizations_url": "https://api.github.com/users/williamFalcon/orgs",
"repos_url": "https://api.github.com/users/williamFalcon/repos",
"events_url": "https://api.github.com/users/williamFalcon/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamFalcon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@sshleifer this looks good now :) \r\n\r\n```\r\nEpoch 1: 0%| | 2/13382 [00:01<3:15:14, 1.14it/s, loss=171200.000, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0\r\nEpoch 1: 0%| | 3/13382 [00:02<2:28:51, 1.50it/s, loss=135029.328, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0\r\nEpoch 1: 0%| | 4/13382 [00:02<2:06:06, 1.77it/s, loss=109544.000, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0\r\nEpoch 1: 20%|███████████████████████▉ | 2672/13382 [23:08<1:32:43, 1.93it/s, loss=7298.400, v_num=1t6n9wkw[[AEpoch 1: 50%|████████████████████████████████████████████████████████████▎ | 6674/13382 [55:41<55:58, 2.00it/s, loss=28663.199, v_num=1t6n9wkw]\r\nEpoch 1: 50%|████████████████████████████████████████████████████████████▎ | 6675/13382 [55:46<56:02, 1.99it/s, loss=28663.199, v_num=1t6n9wkw]\r\nValidating: 76%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 48/63 [03:33<01:07, 4.52s/it]\r\nEpoch 1: 53%|████████████████████████████████████████████████████████████████▎ | 7114/13382 [59:07<52:05, 2.01it/s, loss=28257.600, v_num=1t6n9wkw]\r\n```\r\n\r\n\r\n**This PR:**\r\n- Fixes the lightning_base.py\r\n- adds best practices to it\r\n- fixes seq2seq example\r\n\r\n**TODO:**\r\n- fix the other examples (separate PR likely?)\r\n\r\n**Comments**\r\n- There is a lot of flag mapping that kind of starts to limit what users can do. (accumulate_grad_batches vs whatever). It's not the worst thing, but it adds unnecessary points of failure.\r\n\r\n- data: Download data in prepare_data, use setup to split/etc and assign.\r\n\r\n",
"- Do you have a ROUGE-2 score for that run? The TQDM reported loss is much higher.\r\n- Does multi-gpu + `trainer.test` work?\r\n",
"I think you also need to fix examples/seq2seq/test_seq2seq_examples.py\r\nand `examples/requirements.txt`. But I would only do that after you've verified\r\nthat val-rouge2 doesn't get worse on single and multi-gpu. You can see metrics in `output_dir/metrics.json`.\r\n\r\n",
"@sshleifer ok, this is verified to work with 2 GPUs wnb and apex... what do you want to do now?",
"You also need to run `make style` to satisfy the check_code_quality CI job, and potentially examples/requirements.txt.",
"@sshleifer 0.8.5 is live. we can merge and finish this now :) \r\nThen update the rest of the examples and we should be golden.\r\n\r\nI'll fix the style stuff this weekend",
"A very similar change was merged @moscow25"
] | 1,593 | 1,595 | 1,595 | CONTRIBUTOR | null | 1. update to use from_argparse_args
2. deleted args that were manual but trainer has already.
3. fixed optimizer_step.
4. moved the LR logging to the correct hook
5. dropped the rank_zero_only stuff since that's not needed. All loggers and print ops happen only on rank_zero already. This was redundant.
6. proper use of setup()
@sshleifer let me know how I can test these changes | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5361/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5361",
"html_url": "https://github.com/huggingface/transformers/pull/5361",
"diff_url": "https://github.com/huggingface/transformers/pull/5361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5361.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5360/comments | https://api.github.com/repos/huggingface/transformers/issues/5360/events | https://github.com/huggingface/transformers/pull/5360 | 647,320,783 | MDExOlB1bGxSZXF1ZXN0NDQxMzY3ODgz | 5,360 | [Docs] Benchmark docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually will make a separate PR for the examples README.md - to have the docs in v3.0.0.",
"@sgugger right now it's not ignored, so the slow test will fail because the output isn't the same. I don't think it's too big of a deal though, we can fix that after the release with only partial testing of the file.",
"Thanks for the review. Addressed them and also renamed the classes for consistency."
] | 1,593 | 1,593 | 1,593 | MEMBER | null | This PR updates the docs for benchmarks and adds a README.md where the community can post their benchmark results.
Would be happy about Feedback from @sgugger and @LysandreJik .
@LysandreJik - I deleted the part about "This work was done by [Timothy Liu](https://github.com/tlkh)." because the links were broken. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5360/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5360",
"html_url": "https://github.com/huggingface/transformers/pull/5360",
"diff_url": "https://github.com/huggingface/transformers/pull/5360.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5360.patch",
"merged_at": 1593439738000
} |
https://api.github.com/repos/huggingface/transformers/issues/5359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5359/comments | https://api.github.com/repos/huggingface/transformers/issues/5359/events | https://github.com/huggingface/transformers/issues/5359 | 647,300,535 | MDU6SXNzdWU2NDczMDA1MzU= | 5,359 | Segmentation fault (core dumped) after importing transformers | {
"login": "huangxiaoshuo",
"id": 32594371,
"node_id": "MDQ6VXNlcjMyNTk0Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/32594371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huangxiaoshuo",
"html_url": "https://github.com/huangxiaoshuo",
"followers_url": "https://api.github.com/users/huangxiaoshuo/followers",
"following_url": "https://api.github.com/users/huangxiaoshuo/following{/other_user}",
"gists_url": "https://api.github.com/users/huangxiaoshuo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huangxiaoshuo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huangxiaoshuo/subscriptions",
"organizations_url": "https://api.github.com/users/huangxiaoshuo/orgs",
"repos_url": "https://api.github.com/users/huangxiaoshuo/repos",
"events_url": "https://api.github.com/users/huangxiaoshuo/events{/privacy}",
"received_events_url": "https://api.github.com/users/huangxiaoshuo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I believe this error may be due to [the sentencepiece version 0.1.92 which causes a segmentation fault.](https://github.com/huggingface/transformers/issues/4857).",
"> Hi, I believe this error may be due to [the sentencepiece version 0.1.92 which causes a segmentation fault.](https://github.com/huggingface/transformers/issues/4857).\r\n\r\nThank you very much, this error disappeared when sentencpiece was downgraded to 0.1.91.\r\nBTW, is there some debugging method to find this root cause when we encounter segmentation fault? ",
"Not that I know of :man_shrugging: ",
"It works. GREAT!!!!",
"The error still persists after 6 months. Maybe the version 0.1.91 should be hardcoded in package requirements",
"It was for the last 3.x versions. For version 4.x onwards, `sentencepiece` is not a requirement anymore."
] | 1,593 | 1,607 | 1,593 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.import torch
2.import transformers
3.torch.rand(4,5)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
import torch
import transformers
torch.rand(4,5)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I got an error 'segmentation fault (core dumped)' while trying to generate a tensor after importing transformers, but if I removed 'import transformers', the tensor could be generated.

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0+cu100
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5359/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5358/comments | https://api.github.com/repos/huggingface/transformers/issues/5358/events | https://github.com/huggingface/transformers/issues/5358 | 647,285,294 | MDU6SXNzdWU2NDcyODUyOTQ= | 5,358 | Do T5 have the next-sentence-predict loss? | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\r\nHi @guotong1988 It's possible to do these tasks with text-to-text approach.\r\n\r\nfor predicting next sentence you can process input like this\r\n`sentence1: sentence1_text sentence2: sentence2_text`\r\nand ask the model to predict `true` if sentence2 is the next sentence else `false`\r\n\r\nand for generating next sentence, provide first sentence as input and next sentence as the target.",
"Thank you. It is not generating the whole next sentence. But it is predicting is_next or not_next."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Input the previous sentence and predict/generate the next sentence.
Thank you very much. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5358/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5357/comments | https://api.github.com/repos/huggingface/transformers/issues/5357/events | https://github.com/huggingface/transformers/pull/5357 | 647,271,108 | MDExOlB1bGxSZXF1ZXN0NDQxMzI1Njc4 | 5,357 | Fix table format fot test tesults | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5357",
"html_url": "https://github.com/huggingface/transformers/pull/5357",
"diff_url": "https://github.com/huggingface/transformers/pull/5357.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5357.patch",
"merged_at": 1593435754000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5356 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5356/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5356/comments | https://api.github.com/repos/huggingface/transformers/issues/5356/events | https://github.com/huggingface/transformers/pull/5356 | 647,269,799 | MDExOlB1bGxSZXF1ZXN0NDQxMzI0NTU1 | 5,356 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Pretty cool!",
"Thank you @julien-c"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5356",
"html_url": "https://github.com/huggingface/transformers/pull/5356",
"diff_url": "https://github.com/huggingface/transformers/pull/5356.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5356.patch",
"merged_at": 1593435716000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5355 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5355/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5355/comments | https://api.github.com/repos/huggingface/transformers/issues/5355/events | https://github.com/huggingface/transformers/pull/5355 | 647,223,357 | MDExOlB1bGxSZXF1ZXN0NDQxMjg1NjQw | 5,355 | Update Bertabs example to work again | {
"login": "MichaelJanz",
"id": 66110831,
"node_id": "MDQ6VXNlcjY2MTEwODMx",
"avatar_url": "https://avatars.githubusercontent.com/u/66110831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelJanz",
"html_url": "https://github.com/MichaelJanz",
"followers_url": "https://api.github.com/users/MichaelJanz/followers",
"following_url": "https://api.github.com/users/MichaelJanz/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelJanz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelJanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelJanz/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelJanz/orgs",
"repos_url": "https://api.github.com/users/MichaelJanz/repos",
"events_url": "https://api.github.com/users/MichaelJanz/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelJanz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There are `remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization` and also `remi/bertabs-finetuned-extractive-abstractive-summarization`. Which should be used here? @sshleifer ",
"Also #5234",
"Model remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization did not provide GPU support, which I (newbie) would expect from cnndm prefix.\r\nI had to use remi/bertabs-finetuned-extractive-abstractive-summarization for GPU support.",
"Thanks! Circleci wants you to run `make style` I believe.",
"Ty, will do the next time. Is there a guideline for how to properly set up pull requests? I was not able to find one",
"Yes: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Fix the bug 'Attempted relative import with no known parent package' when using the bertabs example. Also change the used model from bertabs-finetuned-cnndm, since it seems not be accessible anymore | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5355/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5355",
"html_url": "https://github.com/huggingface/transformers/pull/5355",
"diff_url": "https://github.com/huggingface/transformers/pull/5355.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5355.patch",
"merged_at": 1593497102000
} |
https://api.github.com/repos/huggingface/transformers/issues/5354 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5354/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5354/comments | https://api.github.com/repos/huggingface/transformers/issues/5354/events | https://github.com/huggingface/transformers/pull/5354 | 647,124,265 | MDExOlB1bGxSZXF1ZXN0NDQxMjA1MDE3 | 5,354 | Added data collator for XLNet language modeling and related calls | {
"login": "shngt",
"id": 20009551,
"node_id": "MDQ6VXNlcjIwMDA5NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shngt",
"html_url": "https://github.com/shngt",
"followers_url": "https://api.github.com/users/shngt/followers",
"following_url": "https://api.github.com/users/shngt/following{/other_user}",
"gists_url": "https://api.github.com/users/shngt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shngt/subscriptions",
"organizations_url": "https://api.github.com/users/shngt/orgs",
"repos_url": "https://api.github.com/users/shngt/repos",
"events_url": "https://api.github.com/users/shngt/events{/privacy}",
"received_events_url": "https://api.github.com/users/shngt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=h1) Report\n> Merging [#5354](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.15%`.\n> The diff coverage is `82.51%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5354 +/- ##\n==========================================\n+ Coverage 76.18% 77.33% +1.15% \n==========================================\n Files 138 141 +3 \n Lines 24292 24660 +368 \n==========================================\n+ Hits 18506 19071 +565 \n+ Misses 5786 5589 -197 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.21% <0.00%> (-0.45%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `80.85% <ø> (ø)` | |\n| [src/transformers/trainer\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <4.34%> (-2.16%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `61.06% <17.30%> (-37.33%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <20.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <30.00%> (-1.43%)` | :arrow_down: |\n| [src/transformers/training\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <44.44%> (-3.71%)` | :arrow_down: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+25.94%)` | :arrow_up: |\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.37% <77.96%> (-0.86%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.67% <85.67%> (ø)` | |\n| ... and [39 more](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=footer). Last update [28a690a...3397fb4](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot for the PR @shngt - this looks really cool :-) \r\n\r\nIt's won't actually be that easy to make this work with the trainer since we will have to pass `mems` to the `model(...)` and retrieve it from the outputs. This won't be solved in a nice clean way IMO. \r\n\r\nWe will have to adapt this function here: https://github.com/huggingface/transformers/blob/331d8d2936e7a140225cf60301ba6469930fd216/src/transformers/trainer.py#L572\r\n\r\nBecause we don't have `namedtuples` (another case why we should have `namedtuples` :D @thomwolf @LysandreJik) \r\nI would suggest to add the following lines after this one: https://github.com/huggingface/transformers/blob/331d8d2936e7a140225cf60301ba6469930fd216/src/transformers/trainer.py#L580 for the moment (not very pretty to this :-/):\r\n\r\n```python \r\n outputs = model(**inputs)\r\n loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n \r\n ...\r\n if model.config_class.__name__ in [\"XLNetConfig\", \"TransfoXLConfig\"]:\r\n mems = outputs[1]\r\n else:\r\n mems = None\r\n return loss, mems\r\n```\r\n\r\nI don't see a better solution at the moment. What do you think @LysandreJik @julien-c ?\r\n\r\nWould it be ok for you to add a couple of \"not-so-pretty\" if-statements to the trainer? \r\n\r\n@shngt - Let's first decide on how to adapt the `Trainer` before continuing.",
"Wait for #5399 to be merged :-) ",
"Sorry for the mess, I didn't know rebasing would do this. Repeating the commit message for clarity:\r\nChanged the name of `DataCollatorForXLNetLanguageModeling` to the more general `DataCollatorForPermutationLanguageModelling`.\r\nRemoved the `--mlm` flag requirement for the new collator and defined a separate `--plm_probability` flag for its use.\r\nCTRL uses a CLM loss just like GPT and GPT-2, so should work out of the box with this script (provided `past` is taken care of\r\nsimilar to `mems` for XLNet). Added a few words in the comments to reflect this.\r\nChanged calls and imports appropriately.\r\n",
"Yeah rebasing can be dangerous, happens to us all the time :D. \r\n#5399 is merged so it should be possible to add the XLNet data collator.\r\n\r\nI would here to copy the only file that is changed from the branch `src/transformers/data/data_collator.py` (I tihnk) and then open a new PR where you paste in this file. Then this PR should be clean. We can close this PR then since it seems to be borked."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Added `DataCollatorForXLNetLanguageModeling` in `data/data_collator.py` to return necessary inputs (applies masking and generates revelant tensors i.e. input_ids, perm_mask, target_mask and labels as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py) for language modeling training with XLNetLMHeadModel. Also added related arguments, logic and calls in `examples/language-modeling/run_language_modeling.py`.
Resolves: #4739, #2008 (partially) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5354",
"html_url": "https://github.com/huggingface/transformers/pull/5354",
"diff_url": "https://github.com/huggingface/transformers/pull/5354.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5354.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5353 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5353/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5353/comments | https://api.github.com/repos/huggingface/transformers/issues/5353/events | https://github.com/huggingface/transformers/pull/5353 | 647,043,694 | MDExOlB1bGxSZXF1ZXN0NDQxMTQzMjUw | 5,353 | Create model card for asafaya/bert-large-arabic | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=h1) Report\n> Merging [#5353](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.73%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5353 +/- ##\n==========================================\n+ Coverage 76.18% 77.91% +1.73% \n==========================================\n Files 138 138 \n Lines 24292 24292 \n==========================================\n+ Hits 18506 18928 +422 \n+ Misses 5786 5364 -422 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <0.00%> (-0.15%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=footer). Last update [28a690a...4558772](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5353/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5353",
"html_url": "https://github.com/huggingface/transformers/pull/5353",
"diff_url": "https://github.com/huggingface/transformers/pull/5353.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5353.patch",
"merged_at": 1593435511000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5352 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5352/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5352/comments | https://api.github.com/repos/huggingface/transformers/issues/5352/events | https://github.com/huggingface/transformers/pull/5352 | 647,043,211 | MDExOlB1bGxSZXF1ZXN0NDQxMTQyODgw | 5,352 | Create model card for asafaya/bert-mini-arabic | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=h1) Report\n> Merging [#5352](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5352 +/- ##\n==========================================\n- Coverage 76.18% 76.14% -0.04% \n==========================================\n Files 138 138 \n Lines 24292 24292 \n==========================================\n- Hits 18506 18497 -9 \n- Misses 5786 5795 +9 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (-0.59%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=footer). Last update [28a690a...62e5950](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! Added references to the datasets, feel free to update."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5352",
"html_url": "https://github.com/huggingface/transformers/pull/5352",
"diff_url": "https://github.com/huggingface/transformers/pull/5352.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5352.patch",
"merged_at": 1593434502000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5351 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5351/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5351/comments | https://api.github.com/repos/huggingface/transformers/issues/5351/events | https://github.com/huggingface/transformers/pull/5351 | 647,043,121 | MDExOlB1bGxSZXF1ZXN0NDQxMTQyODA1 | 5,351 | Create model card for asafaya/bert-medium-arabic | {
"login": "alisafaya",
"id": 22398153,
"node_id": "MDQ6VXNlcjIyMzk4MTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/22398153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alisafaya",
"html_url": "https://github.com/alisafaya",
"followers_url": "https://api.github.com/users/alisafaya/followers",
"following_url": "https://api.github.com/users/alisafaya/following{/other_user}",
"gists_url": "https://api.github.com/users/alisafaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alisafaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alisafaya/subscriptions",
"organizations_url": "https://api.github.com/users/alisafaya/orgs",
"repos_url": "https://api.github.com/users/alisafaya/repos",
"events_url": "https://api.github.com/users/alisafaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/alisafaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=h1) Report\n> Merging [#5351](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5351 +/- ##\n==========================================\n+ Coverage 76.18% 77.15% +0.97% \n==========================================\n Files 138 138 \n Lines 24292 24292 \n==========================================\n+ Hits 18506 18743 +237 \n+ Misses 5786 5549 -237 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.75% <0.00%> (-2.79%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=footer). Last update [28a690a...17af457](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! Feel free to open another PR for tweaks if needed"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5351",
"html_url": "https://github.com/huggingface/transformers/pull/5351",
"diff_url": "https://github.com/huggingface/transformers/pull/5351.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5351.patch",
"merged_at": 1593434161000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5350 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5350/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5350/comments | https://api.github.com/repos/huggingface/transformers/issues/5350/events | https://github.com/huggingface/transformers/pull/5350 | 647,036,121 | MDExOlB1bGxSZXF1ZXN0NDQxMTM3MzQw | 5,350 | Move tests/utils.py -> transformers/testing_utils.py | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2139563322,
"node_id": "MDU6TGFiZWwyMTM5NTYzMzIy",
"url": "https://api.github.com/repos/huggingface/transformers/labels/cleanup",
"name": "cleanup",
"color": "e7fc49",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"CI failure is spurious",
"no objection",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=h1) Report\n> Merging [#5350](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.86%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5350 +/- ##\n==========================================\n+ Coverage 76.18% 78.04% +1.86% \n==========================================\n Files 138 139 +1 \n Lines 24292 24339 +47 \n==========================================\n+ Hits 18506 18996 +490 \n+ Misses 5786 5343 -443 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `80.85% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (-0.59%)` | :arrow_down: |\n| [src/transformers/optimization\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |\n| [src/transformers/tokenization\\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=footer). Last update [28a690a...4b09e67](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Fixes #5350
The motivation is to allow examples/ tests to use these utilities.
for both groups of tests, the import is
```python
from transformers.testing_utils import slow
```
Motivation: I was about to rewrite the @slow decorator today and felt that this was cleaner. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5350",
"html_url": "https://github.com/huggingface/transformers/pull/5350",
"diff_url": "https://github.com/huggingface/transformers/pull/5350.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5350.patch",
"merged_at": 1593613878000
} |
https://api.github.com/repos/huggingface/transformers/issues/5349 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5349/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5349/comments | https://api.github.com/repos/huggingface/transformers/issues/5349/events | https://github.com/huggingface/transformers/issues/5349 | 647,033,435 | MDU6SXNzdWU2NDcwMzM0MzU= | 5,349 | T5 FP16: bad generations from my converted checkpoint | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Not sure whether I can help you here. I don't think fp16 is really stable for T5 no? ",
"Definitely not stable. But you had an idea for how to fix related to layernorm if i recall?\r\n",
"As it is now layer norm is always done in fp32: https://github.com/huggingface/transformers/blob/9a473f1e43221348334b9e7f95bb45770b7ef268/src/transformers/modeling_t5.py#L157\r\nBut even that does not seem to be enough to make fp16 work all the time",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | CONTRIBUTOR | null | `t5-base` can generate reasonable summaries in fp16, but my checkpoint `sshleifer/t5-base-cnn` cannot. Is there something besides the conversion script I need to do to make fp16 work?
[Colab](https://colab.research.google.com/drive/1k8agNDPdzfF38aTrz5o1kcqvWDHUxAnD?usp=sharing) showing good generations for t5-base, bad for my checkpoint.
Without colab,
```python
from transformers import *
device = 'cuda'
tokenizer = T5Tokenizer.from_pretrained('t5-base')
my_model = T5ForConditionalGeneration.from_pretrained('sshleifer/t5-base-cnn').to(device)
TXT = """summarize: Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that "so far no videos were used in the crash investigation." He added, "A person who has such a video needs to immediately give it to the investigators." Robin\'s comments follow claims by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two publications described the supposed video, but did not post it on their websites. The publications said that they watched the video, which was found by a source close to the investigation. "One can hear cries of \'My God\' in several languages," Paris Match reported. "Metallic banging can also be heard more than three times, perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy shake, stronger than the others, the screaming intensifies. Then nothing." "It is a very disturbing scene," said Julian Reichelt, editor-in-chief of Bild online. An official with France\'s accident investigation agency, the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the reports were "completely wrong" and "unwarranted." Cell phones have been collected at the site, he said, but that they "hadn\'t been exploited yet." Menichini said he believed the cell phones would need to be sent to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent to the institute, Menichini said. Asked whether staff involved in the search could have leaked a memory card to the media, Menichini answered with a categorical "no." Reichelt told "Erin Burnett: Outfront" that he had watched the video and stood by the report, saying Bild and Paris Match are "very confident" that the clip is real. He noted that investigators only revealed they\'d recovered cell phones from the crash site after Bild and Paris Match published their reports."""
batch = tokenizer.batch_encode_plus([TXT], return_tensors='pt').to(device)
tokenizer.batch_decode(my_model.generate(**batch, skip_special_tokens=True)) # way shorter than t5-base but english
# ['prosecutor in crash investigation says he is not aware of any video footage from on board ']
my_model = my_model.half()
my_model.generate(**batch, skip_special_tokens=True)
# [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
```
**Update:** On master +brutasse, (torch 1.5), t5-base produces bad generations in fp16. They are similar to the recently converted checkpoint -all bos. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5349/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5348 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5348/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5348/comments | https://api.github.com/repos/huggingface/transformers/issues/5348/events | https://github.com/huggingface/transformers/issues/5348 | 647,028,529 | MDU6SXNzdWU2NDcwMjg1Mjk= | 5,348 | T5 Warning: embeddings are not initialized | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This [comment](https://github.com/huggingface/transformers/issues/3553#issuecomment-624306027) answers this issue."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | ```python
from transformers import *
model = T5ForConditionalGeneration.from_pretrained("t5-base")
```
Is this concerning?
```
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Relatedly, I am porting a summarization checkpoint and wondering whether I should initialize lm_head. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5348/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5348/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5347 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5347/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5347/comments | https://api.github.com/repos/huggingface/transformers/issues/5347/events | https://github.com/huggingface/transformers/issues/5347 | 647,025,018 | MDU6SXNzdWU2NDcwMjUwMTg= | 5,347 | Training with a large dataset | {
"login": "hgjlee",
"id": 11896786,
"node_id": "MDQ6VXNlcjExODk2Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/11896786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgjlee",
"html_url": "https://github.com/hgjlee",
"followers_url": "https://api.github.com/users/hgjlee/followers",
"following_url": "https://api.github.com/users/hgjlee/following{/other_user}",
"gists_url": "https://api.github.com/users/hgjlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hgjlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hgjlee/subscriptions",
"organizations_url": "https://api.github.com/users/hgjlee/orgs",
"repos_url": "https://api.github.com/users/hgjlee/repos",
"events_url": "https://api.github.com/users/hgjlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/hgjlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, can you share more details, (all the details asked in the issue template for instance).\r\nIn particular, the exact command line you are using, the full error message, etc...",
"Hi! Thank you for your reply. My apologies for not using the template.\r\n\r\nI'm following the tutorial on Colab and switched LineByLineTextDataset to the following custom dataset class to process each line on the fly rather than loading everything in memory:\r\n```\r\nfrom torch.utils.data import IterableDataset\r\n\r\nclass CustomIterableDataset(IterableDataset):\r\n def __init__(self, filename, tokenizer, block_size, len):\r\n self.filename = filename\r\n self.tokenizer = tokenizer\r\n self.block_size = block_size\r\n self.len = len \r\n\r\n def preprocess(self, text):\r\n batch_encoding = self.tokenizer(text.strip(\"\\n\"), add_special_tokens=True, truncation=True, max_length=self.block_size)\r\n\r\n return torch.tensor(batch_encoding[\"input_ids\"])\r\n\r\n def line_mapper(self, line): \r\n return self.preprocess(line)\r\n\r\n def __iter__(self):\r\n file_itr = open(self.filename, encoding=\"utf-8\")\r\n mapped_itr = map(self.line_mapper, file_itr)\r\n\r\n return mapped_itr\r\n\r\n def __len__(self):\r\n return self.len\r\n\r\ndataset = CustomIterableDataset(\"large_corpus.txt\", tokenizer=tokenizer, block_size=128, len=40228303)\r\n```\r\nI run the rest of the script until this part:\r\n```\r\n%%time\r\ntrainer.train()\r\n```\r\n\r\nThe error looks like this:\r\n```\r\nValueError Traceback (most recent call last)\r\n<ipython-input-31-0c647bc3a8b8> in <module>()\r\n----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()')\r\n\r\n5 frames\r\n<decorator-gen-60> in time(self, line, cell, local_ns)\r\n\r\n<timed eval> in <module>()\r\n\r\n/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn, multiprocessing_context)\r\n 177 raise ValueError(\r\n 178 \"DataLoader with IterableDataset: expected unspecified \"\r\n--> 179 \"sampler option, but got sampler={}\".format(sampler))\r\n 180 elif batch_sampler is not None:\r\n 181 # See NOTE [ Custom Samplers and IterableDataset ]\r\n\r\nValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f1eaf8fa128>\r\n```\r\n\r\nPlease let me know if there's anything else I can provide to help you understand this problem.",
"That's because unfortunately the trainer cannot be currently used with an `IterableDataset`, because the `get_train_dataloader` method creates a `DataLoader` with a sampler, while `IterableDataset` may not be used with a sampler. You could override the trainer and reimplement that method as follows:\r\n\r\n```py\r\n def get_train_dataloader(self) -> DataLoader:\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n if is_tpu_available():\r\n train_sampler = get_tpu_sampler(self.train_dataset)\r\n else:\r\n train_sampler = (\r\n RandomSampler(self.train_dataset)\r\n if self.args.local_rank == -1\r\n else DistributedSampler(self.train_dataset)\r\n )\r\n data_loader = DataLoader(\r\n self.train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler if not isinstance(self.train_dataset, IterableDataset) else None,\r\n collate_fn=self.data_collator.collate_batch,\r\n )\r\n```\r\n\r\nNote how we're not passing the sampler if it's an `IterableDataset`.",
"@LysandreJik Thank you so much for your help. I just started training with your suggestion. \r\n\r\nFor future references, I had to make the following trivial adjustments to the snippet to make it with the current master version.\r\n\r\n```\r\nfrom torch.utils.data.dataset import IterableDataset\r\n\r\ndef get_train_dataloader(self) -> DataLoader:\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n if is_torch_tpu_available():\r\n train_sampler = get_tpu_sampler(self.train_dataset)\r\n else:\r\n train_sampler = (\r\n RandomSampler(self.train_dataset)\r\n if self.args.local_rank == -1\r\n else DistributedSampler(self.train_dataset)\r\n )\r\n \r\n data_loader = DataLoader(\r\n self.train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler if not isinstance(self.train_dataset, IterableDataset) else None,\r\n collate_fn=self.data_collator,\r\n )\r\n\r\n return data_loader\r\n```",
"oops yes forgot the return statement :sweat_smile: glad you got it to work!",
"Thanks! It was super helpful. Actually, I just had to change is_tpu_available() -> is_torch_tpu_available() and collate_fn=self.data_collator.collate_batch -> collate_fn=self.data_collator. I wasn't sure if these were from different versions. ",
"> I'm currently following the tutorial on how to train a new language model ([here](https://huggingface.co/blog/how-to-train)) and facing some issues on Colab because of my large training corpus (+40 mil lines, 5 GB).\r\n> \r\n> I tried to use IterableDataset so I can load data on the fly, and I'm getting this error when I try to train with the script provided in the tutorial:\r\n> \"ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f777bdbe710>\"\r\n> \r\n> What's the best way to resolve this?\r\n\r\nHI hgjlee,\r\n\r\nHave you solve this problem? Could you please share the solution that how to load a huge dataset for the pretraining?\r\nNow my corpus has 10G in hundres of txt files. Now I have no idea to load these file for pretraining. \r\n\r\nThanks in advance for your help."
] | 1,593 | 1,598 | 1,593 | NONE | null | I'm currently following the tutorial on how to train a new language model ([here](https://huggingface.co/blog/how-to-train)) and facing some issues on Colab because of my large training corpus (+40 mil lines, 5 GB).
I tried to use IterableDataset so I can load data on the fly, and I'm getting this error when I try to train with the script provided in the tutorial:
"ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f777bdbe710>"
What's the best way to resolve this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5347/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5347/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5346 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5346/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5346/comments | https://api.github.com/repos/huggingface/transformers/issues/5346/events | https://github.com/huggingface/transformers/issues/5346 | 647,023,878 | MDU6SXNzdWU2NDcwMjM4Nzg= | 5,346 | Upload model card with the CLI | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is not currently documented (cc @sgugger) but it's already possible, see for instance https://huggingface.co/Helsinki-NLP/opus-mt-fr-en\r\n\r\nAt some point it might become the preferred way of publishing a model card too.",
"Will update the tutorial today.",
"Thanks, this is great!"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Upload model card README.md with transformers CLI.
## Motivation
Some applications may require to upload multiple models. It would be nice to be able to upload associated model card with model/tokenizer/config files.
At the moment a PR is required making it more difficult for users to upload their models and nearly impossible for apps relying on training and uploading a lot of different models.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I could help propose a PR for the client side.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5346/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5345 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5345/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5345/comments | https://api.github.com/repos/huggingface/transformers/issues/5345/events | https://github.com/huggingface/transformers/issues/5345 | 646,986,187 | MDU6SXNzdWU2NDY5ODYxODc= | 5,345 | Massive text generation slowdown when using repetition_penalty param on GPU | {
"login": "minimaxir",
"id": 2179708,
"node_id": "MDQ6VXNlcjIxNzk3MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2179708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minimaxir",
"html_url": "https://github.com/minimaxir",
"followers_url": "https://api.github.com/users/minimaxir/followers",
"following_url": "https://api.github.com/users/minimaxir/following{/other_user}",
"gists_url": "https://api.github.com/users/minimaxir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minimaxir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minimaxir/subscriptions",
"organizations_url": "https://api.github.com/users/minimaxir/orgs",
"repos_url": "https://api.github.com/users/minimaxir/repos",
"events_url": "https://api.github.com/users/minimaxir/events{/privacy}",
"received_events_url": "https://api.github.com/users/minimaxir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the notebook - I can reproduce the results both on a local GPU and on colab. You are right, this slowdown is disproportionate! \r\nI'm suspecting the double for loop to be the reason for the heavy slow down. It might be possible to replace the two for loops by some smart matrix operations, but not sure.\r\n\r\nAlso pinging our PyTorch GPU master @mfuntowicz - do you maybe have some insight here?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,600 | 1,600 | NONE | null | # 🐛 Bug
## Information
Text generation when using the `repetition_penalty` takes about 2x-10x longer on a GPU, which is disproportionate and implies the GPU may not be used in that instance.
Reported from https://github.com/minimaxir/aitextgen/issues/34
The `enforce_repetition_penalty_` function at
https://github.com/huggingface/transformers/blob/08c9607c3d025f9f1a0c40e6d124d5d5d446208e/src/transformers/modeling_utils.py#L817
may be using CPU ops instead of Tensor ops.
## To reproduce
Demo Colab notebook w/ time benchmarks: https://colab.research.google.com/drive/1SzYUEC0xikHN8OEp2WOTWHlT6JxT9rRf?usp=sharing
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5345/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5345/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5344 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5344/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5344/comments | https://api.github.com/repos/huggingface/transformers/issues/5344/events | https://github.com/huggingface/transformers/pull/5344 | 646,926,881 | MDExOlB1bGxSZXF1ZXN0NDQxMDYwNTAx | 5,344 | [examples] fix example links | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=h1) Report\n> Merging [#5344](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/98109464c12619c4164ba7714f3e5526a290239a&el=desc) will **decrease** coverage by `1.10%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5344 +/- ##\n==========================================\n- Coverage 77.91% 76.80% -1.11% \n==========================================\n Files 138 138 \n Lines 24282 24282 \n==========================================\n- Hits 18920 18651 -269 \n- Misses 5362 5631 +269 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `24.19% <0.00%> (-74.20%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `17.45% <0.00%> (-21.94%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.10% <0.00%> (-17.35%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `65.26% <0.00%> (-11.58%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `91.78% <0.00%> (-2.74%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.41% <0.00%> (-2.36%)` | :arrow_down: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `26.31% <0.00%> (-1.32%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=footer). Last update [9810946...fc8f199](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | Fix links for summarization and translation examples in BIG TABLE OF TASKS.
Regarding issue #5309
@sshleifer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5344/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5344",
"html_url": "https://github.com/huggingface/transformers/pull/5344",
"diff_url": "https://github.com/huggingface/transformers/pull/5344.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5344.patch",
"merged_at": 1593363295000
} |
https://api.github.com/repos/huggingface/transformers/issues/5343 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5343/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5343/comments | https://api.github.com/repos/huggingface/transformers/issues/5343/events | https://github.com/huggingface/transformers/pull/5343 | 646,905,332 | MDExOlB1bGxSZXF1ZXN0NDQxMDQ3NzUx | 5,343 | [Reformer] Simpler reverse sort backward implementation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=h1) Report\n> Merging [#5343](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **increase** coverage by `0.40%`.\n> The diff coverage is `66.66%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5343 +/- ##\n==========================================\n+ Coverage 77.93% 78.33% +0.40% \n==========================================\n Files 138 138 \n Lines 23860 23851 -9 \n==========================================\n+ Hits 18595 18684 +89 \n+ Misses 5265 5167 -98 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `89.18% <66.66%> (+0.97%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.41% <0.00%> (-0.74%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=footer). Last update [9fe09ce...d53f899](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | MEMBER | null | After reviewing code of similar PyTorch implementation: https://github.com/lucidrains/reformer-pytorch/pull/104 and original trax code again: https://github.com/google/trax/blob/master/trax/layers/research/efficient_attention.py#L1265-L1266, there is a simpler way to implement the backward reverse sort function.
The code is taken from https://github.com/lucidrains/reformer-pytorch/blob/42a8682ff8e7cec3122eff6febc9087f1c53f370/reformer_pytorch/reformer_pytorch.py#L414.
This simpler code is also implemented in the Reformer test branch and all tests are checked for correctness: https://github.com/huggingface/transformers/tree/branch_to_save_trax_integration_tests
**Note**: There was no bug in the code before / the logic of the code has not changed. This PR just makes the backward function of ReverseSort simpler.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5343/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5343",
"html_url": "https://github.com/huggingface/transformers/pull/5343",
"diff_url": "https://github.com/huggingface/transformers/pull/5343.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5343.patch",
"merged_at": 1593347546000
} |
https://api.github.com/repos/huggingface/transformers/issues/5342 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5342/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5342/comments | https://api.github.com/repos/huggingface/transformers/issues/5342/events | https://github.com/huggingface/transformers/pull/5342 | 646,879,121 | MDExOlB1bGxSZXF1ZXN0NDQxMDI4NDc5 | 5,342 | Added support for XLNet language modelling training in examples | {
"login": "shngt",
"id": 20009551,
"node_id": "MDQ6VXNlcjIwMDA5NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shngt",
"html_url": "https://github.com/shngt",
"followers_url": "https://api.github.com/users/shngt/followers",
"following_url": "https://api.github.com/users/shngt/following{/other_user}",
"gists_url": "https://api.github.com/users/shngt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shngt/subscriptions",
"organizations_url": "https://api.github.com/users/shngt/orgs",
"repos_url": "https://api.github.com/users/shngt/repos",
"events_url": "https://api.github.com/users/shngt/events{/privacy}",
"received_events_url": "https://api.github.com/users/shngt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"CircleCI says:\r\nexamples/language-modeling/run_language_modeling.py:29: in <module>\r\n from transformers import (\r\nE ImportError: cannot import name 'DataCollatorForXLNetLanguageModeling'\r\n\r\nHow can I fix this?",
"`DataCollatorForXLNetLanguageModeling` needs to be imported into `src/transformers/__init__.py` to be imported `from transformers`"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Updated src/transformers/data/data_collator.py with a new XLNet-specific collator that applies masking and generates revelant tensors (input_ids, perm_mask, target_mask, labels) as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py. Also added relevant calls and imports in examples/language-modeling/run_language_modeling.py. Relevant issues #4739 #2008 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5342/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5342",
"html_url": "https://github.com/huggingface/transformers/pull/5342",
"diff_url": "https://github.com/huggingface/transformers/pull/5342.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5342.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5341 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5341/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5341/comments | https://api.github.com/repos/huggingface/transformers/issues/5341/events | https://github.com/huggingface/transformers/issues/5341 | 646,863,707 | MDU6SXNzdWU2NDY4NjM3MDc= | 5,341 | In Tensorflow the serving is very slow | {
"login": "only-yao",
"id": 36235579,
"node_id": "MDQ6VXNlcjM2MjM1NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/36235579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/only-yao",
"html_url": "https://github.com/only-yao",
"followers_url": "https://api.github.com/users/only-yao/followers",
"following_url": "https://api.github.com/users/only-yao/following{/other_user}",
"gists_url": "https://api.github.com/users/only-yao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/only-yao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/only-yao/subscriptions",
"organizations_url": "https://api.github.com/users/only-yao/orgs",
"repos_url": "https://api.github.com/users/only-yao/repos",
"events_url": "https://api.github.com/users/only-yao/events{/privacy}",
"received_events_url": "https://api.github.com/users/only-yao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't get it"
] | 1,593 | 1,594 | 1,594 | NONE | null | # ❓ I'm using the gpt2 model, In Tensorflow the serving is very slow, Tensorflow is much faster.
## Details
I use tf.saved_model save model
`tf.saved_model.save(model, export_dir="/model/2/")`
Tensorflow reads the model
```
imported = tf.saved_model.load("/model/2/")
f = imported.signatures["serving_default"]
```
The speed of Tensorflow is much higher than that of tensorflow
In the same model without the use of transformers, tensorFlow and TensorFlow serve at the same speed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5341/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5341/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5340 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5340/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5340/comments | https://api.github.com/repos/huggingface/transformers/issues/5340/events | https://github.com/huggingface/transformers/issues/5340 | 646,859,546 | MDU6SXNzdWU2NDY4NTk1NDY= | 5,340 | GPT2Tokenizer remove the ' ' (space) if it is at the end of text? | {
"login": "carter54",
"id": 26741594,
"node_id": "MDQ6VXNlcjI2NzQxNTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/26741594?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carter54",
"html_url": "https://github.com/carter54",
"followers_url": "https://api.github.com/users/carter54/followers",
"following_url": "https://api.github.com/users/carter54/following{/other_user}",
"gists_url": "https://api.github.com/users/carter54/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carter54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carter54/subscriptions",
"organizations_url": "https://api.github.com/users/carter54/orgs",
"repos_url": "https://api.github.com/users/carter54/repos",
"events_url": "https://api.github.com/users/carter54/events{/privacy}",
"received_events_url": "https://api.github.com/users/carter54/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi @carter54,\r\n\r\nThanks for your issue! I think you're correct. This was a bug in a previous version.\r\nNote that for v3.0.0 we have done a major refactoring that corrected this error as far as I can see. \r\n\r\nIf you take a look at this line in current master in this line: https://github.com/huggingface/transformers/blob/9d9b872b66f9ab9b7b7c73f2c00985dd92c4121b/src/transformers/tokenization_utils.py#L310\r\n\r\nYou can see that the whitespace is now only stripped when `all_special_tokens_extended.get(tok, None).lstrip)` is explicitly set to `True` which is not the default case for `gpt2`. So using these versions:\r\n\r\n```\r\ntokenizers.__version__: 0.8.0.rc3\r\n```\r\nand \r\n```\r\ntransformers.__version__: 3.0.1 (master)\r\n```\r\n\r\nI cannot reproduce the error. \r\n\r\nCould you update transformers via:\r\n\r\n```\r\npip install transformers --upgrade\r\n```\r\n\r\nand check if the error persists? I think v3.0.0 should have fixed it :-) ",
"@patrickvonplaten Yes, I can see it is fixed in V3.0.0.\r\nThanks~"
] | 1,593 | 1,594 | 1,594 | NONE | null | # ❓ Questions & Help
GPT2Tokenizer remove the ' ' (space) if it is at the end of text?
## Details
<!-- Description of your issue -->
Hello~ I'm using transformers v2.11.0 to run some GPT2 model.
When I tested the code, I found that the token id list generated by GPT2Tokenizer had confusing result when I input a text string end with a space.
To reproduce:
```
from transformers import GPT2Tokenizer
tokenizer_file = 'path_to_your_tokenzier_files'
tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_file)
# case 1: without a space at the end input text string
text_1 = 'import tensorflow as'
encoded_1 = tokenizer.encode(text_1, add_special_tokens=False)
print(encoded_1)
# case 2: with a space at the end input text string
text_2 = 'import tensorflow as ' # noted there is a space at the end
encoded_2 = tokenizer.encode(text_2, add_special_tokens=False)
print(encoded_2)
```
the output is confusing
```
# case 1:
[618, 1969, 573]
# case 2:
[618, 1969, 573]
```
both these two case output the same token id list, with my tokenzier is [618, 1969, 573]. Which means the space at the end of case2 is ignored in the tokenization process.
I debug the code and found that the space is ignored at this step
https://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/tokenization_utils.py#L1288
Is this step with some propose?
I also tried the ByteLevelBPETokenizer in tokenizers project
```
from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer_file = 'path_to_your_tokenzier_files'
tokenizer = ByteLevelBPETokenizer(tokenizer_file)
# case 1: without a space at the end input text string
text_1 = 'import tensorflow as'
encoded_1 = tokenizer.encode(text_1)
print(len(encoded_1))
print(encoded_1.ids)
print(encoded_1.tokens)
# case 2: with a space at the end input text string
text_2 = 'import tensorflow as '
encoded_2 = tokenizer.encode(text_2)
print(len(encoded_2))
print(encoded_2.ids)
print(encoded_2.tokens)
```
the output are as expected:
```
# case 1
3
[618, 5015, 573]
['import', 'Ġtensorflow', 'Ġas']
# case 2
4
[618, 5015, 573, 231]
['import', 'Ġtensorflow', 'Ġas', 'Ġ']
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5340/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5340/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5339 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5339/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5339/comments | https://api.github.com/repos/huggingface/transformers/issues/5339/events | https://github.com/huggingface/transformers/issues/5339 | 646,856,604 | MDU6SXNzdWU2NDY4NTY2MDQ= | 5,339 | Predefined tasks in T5 | {
"login": "vishal-burman",
"id": 19861874,
"node_id": "MDQ6VXNlcjE5ODYxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/19861874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vishal-burman",
"html_url": "https://github.com/vishal-burman",
"followers_url": "https://api.github.com/users/vishal-burman/followers",
"following_url": "https://api.github.com/users/vishal-burman/following{/other_user}",
"gists_url": "https://api.github.com/users/vishal-burman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vishal-burman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishal-burman/subscriptions",
"organizations_url": "https://api.github.com/users/vishal-burman/orgs",
"repos_url": "https://api.github.com/users/vishal-burman/repos",
"events_url": "https://api.github.com/users/vishal-burman/events{/privacy}",
"received_events_url": "https://api.github.com/users/vishal-burman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can take a look the paper to find out other tasks. AFAIK it includes GLUE tasks, and SQuAD QA as well.",
"https://arxiv.org/pdf/1910.10683\r\n\r\nOn page 45 the examples for many tasks are starting and ending on page 52",
"Thanks for the help. I will refer the paper."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | ```
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
```
The config file generated while saving a T5 model mentions the above-predefined tasks which can be done by just prepending the inputs with appropriate prefixes. I wanted to know if there is a list that shows any other predefined tasks which can be performed by T5. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5339/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5338/comments | https://api.github.com/repos/huggingface/transformers/issues/5338/events | https://github.com/huggingface/transformers/issues/5338 | 646,832,749 | MDU6SXNzdWU2NDY4MzI3NDk= | 5,338 | Confuse by "All learning rates are 0" | {
"login": "BeHappyForMe",
"id": 25237218,
"node_id": "MDQ6VXNlcjI1MjM3MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/25237218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BeHappyForMe",
"html_url": "https://github.com/BeHappyForMe",
"followers_url": "https://api.github.com/users/BeHappyForMe/followers",
"following_url": "https://api.github.com/users/BeHappyForMe/following{/other_user}",
"gists_url": "https://api.github.com/users/BeHappyForMe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BeHappyForMe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BeHappyForMe/subscriptions",
"organizations_url": "https://api.github.com/users/BeHappyForMe/orgs",
"repos_url": "https://api.github.com/users/BeHappyForMe/repos",
"events_url": "https://api.github.com/users/BeHappyForMe/events{/privacy}",
"received_events_url": "https://api.github.com/users/BeHappyForMe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've got the same problem here. Have you solved it?",
"Same here. Any solution to solve? Or is this just a warning message that can be ignored?",
"Pinging @sshleifer ",
"Ignore it. I will try to get it out of the code. I think the lr starts at 0 and climbs up for `warmup_steps` iterations."
] | 1,593 | 1,597 | 1,593 | NONE | null | transformers/examples/seq2seq/finetune.py 219row :
if max(scheduler.get_last_lr()) > 0:
warnings.warn("All learning rates are 0")
why, is there something i left?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5337/comments | https://api.github.com/repos/huggingface/transformers/issues/5337/events | https://github.com/huggingface/transformers/pull/5337 | 646,796,514 | MDExOlB1bGxSZXF1ZXN0NDQwOTc4Mjkw | 5,337 | arxiv-ai-gpt2 model card | {
"login": "chrisliu298",
"id": 59010212,
"node_id": "MDQ6VXNlcjU5MDEwMjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/59010212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisliu298",
"html_url": "https://github.com/chrisliu298",
"followers_url": "https://api.github.com/users/chrisliu298/followers",
"following_url": "https://api.github.com/users/chrisliu298/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisliu298/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisliu298/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisliu298/subscriptions",
"organizations_url": "https://api.github.com/users/chrisliu298/orgs",
"repos_url": "https://api.github.com/users/chrisliu298/repos",
"events_url": "https://api.github.com/users/chrisliu298/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisliu298/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=h1) Report\n> Merging [#5337](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5337 +/- ##\n==========================================\n- Coverage 77.69% 77.51% -0.19% \n==========================================\n Files 138 138 \n Lines 24291 24291 \n==========================================\n- Hits 18872 18828 -44 \n- Misses 5419 5463 +44 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.40% <0.00%> (-42.04%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `57.14% <0.00%> (-38.10%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.56% <0.00%> (-1.92%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=footer). Last update [1af58c0...bf2c4ac](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the reply! \r\n\r\nYeah, if that is not allowed, I'll just remove it an provide it somewhere else. I'll also remove the extra lines in the model card.\r\n\r\n---\r\n\r\nJust removed the unnecessary lines. If the code is not appropriate, I'll also remove that immediately.",
"> Thanks for the reply!\r\n> \r\n> Yeah, if that is not allowed, I'll just remove it an provide it somewhere else. I'll also remove the extra lines in the model card.\r\n> \r\n> Just removed the unnecessary lines. If the code is not appropriate, I'll also remove that immediately.\r\n\r\nIMO, it makes sense if you upload the code to Github Gist and add a link here. Adding code in this folder is not favorable because we won't be able to maintain any code here and these codes are excluded from the unit testing!",
"Got it and I just removed it. I'll provide a link to GitHub gist in the model card later when it's ready. Sorry for the inconvenience.",
"Nice! Also made me think of @LysandreJik's [`lysandre/arxiv-nlp`](https://huggingface.co/lysandre/arxiv-nlp)"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Add model card and generation script for model arxiv_ai_gpt2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5337/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5337",
"html_url": "https://github.com/huggingface/transformers/pull/5337",
"diff_url": "https://github.com/huggingface/transformers/pull/5337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5337.patch",
"merged_at": 1593435201000
} |
https://api.github.com/repos/huggingface/transformers/issues/5336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5336/comments | https://api.github.com/repos/huggingface/transformers/issues/5336/events | https://github.com/huggingface/transformers/issues/5336 | 646,765,713 | MDU6SXNzdWU2NDY3NjU3MTM= | 5,336 | BillSum dataset finetuning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Interesting! \r\nI wonder what you get starting from bart-large. \r\nIf you use the new finetune.py with --logger wandb I could more easily see your logs and suggest hyperparameters.\r\n",
"Thanks! and Sure, let me do that and share the results soon!",
"Took me a while to get to this :/ the numbers look as expected when fine-tuned with the latest finetune.py! :)\r\nMust have been a bug in my modified script, updated that too and confirmed the results:\r\ndBART:\r\n\"val_avg_loss\": 1.3925694227218628,\r\n\"val_avg_loss\": 1.3925694227218628,\r\n\"val_avg_rouge1\": 0.576209431885421,\r\n\"val_avg_rouge2\": 0.34462863342832595,\r\n\"val_avg_rougeL\": 0.3999874418853423,\r\n\"val_avg_gen_time\": 6.152730942964554,\r\n\"val_avg_summ_len\": 132.768 \r\n\r\nT5:\r\n\"val_avg_loss\": 1.6064366102218628,\r\n\"val_avg_rouge1\": 0.5291717003599621,\r\n\"val_avg_rouge2\": 0.3049920551697407,\r\n\"val_avg_rougeL\": 0.37437278542325914,\r\n\"val_avg_gen_time\": 2.408391712649965,\r\n\"val_avg_summ_len\": 171.18881856540085\r\n\r\n\r\nJust a quick query though if you could help me out, for a lot of the predictions in the resulting test_generation file, I am getting summaries starting with lower case, and then generating a coherent summary from there on.\r\nI looked at my dataset and the code and fine-tuned the models again after some pre-processing but still the same result.\r\nAny ideas?? This was for both T5 and dBart btw.\r\n\r\nThanks! \r\n",
"Can I see an example of an input, output pair that demonstrates the issue you are facing?",
"Here you go...apologies I should have shared the examples in the previous post itself :)\r\n**Example 1:\r\nSrc :** This Act may be cited as the \"Military Call-Up Relief Act\". (a) Waiver For Certain Distributions. (1) In general. Section 72(t)(2) of the Internal Revenue Code of 1986 is amended by adding at the end the following: (G) Distributions to individuals performing national emergency active duty. Any distribution to an individual who, at the time of the distribution, is a member of a reserve component called or ordered to active duty pursuant to a provision of law referred to in section 101(a)(B) of title 10, United States Code, during the period of the national emergency declared by the President on September 14, 2001. (2) Waiver of underpayment penalty. Section 6654(e)(3) of such Code is amended by adding at the end the following: (C) Certain early withdrawals from retirement plans. No addition to tax shall be imposed under subsection (a) with respect to any underpayment to the extent such underpayment was created or increased by any distribution described in section 72(t)(2)(G). (3) Effective date. The amendments made by this subsection shall apply to distributions made to an individual after September 13, 2001. (b) Catch-up Contributions Allowed. (1) Individual retirement accounts. Section 219(b)(5) of the Internal Revenue Code of 1986 is amended by adding at the end the following: (D) Catch-up contributions for certain distributions. In the case of an individual who has received a distribution described in section 72(t)(2)(G), the deductible amount for any taxable year shall be increased by an amount equal to (i) the aggregate amount of such distributions made with respect to such individual, over \" the aggregate amount of such distributions previously taken into account under this subparagraph or section 414(w). (2) Roth iras. Section 408A(c) of such Code is amended by redesignating paragraph (7) as paragraph (8) and by inserting after paragraph (6) the following: (7) Catch-up contributions for certain distributions. Any contribution described in section 219(b)(5)(D) shall not be taken into account for purposes of paragraph (2). (3) Employer plans. Section 414 of such Code is amended by adding at the end the following: (w) Catch-up contributions for certain distributions. (1) In general. An applicable employer plan shall not be treated as failing to meet any requirement of this title solely because the plan permits an applicable participant to make additional elective deferrals in any plan year. (2) Limitation on amount of additional deferrals. (A) In general. A plan shall not permit additional elective deferrals under paragraph (1) for any year in an amount greater than the lesser of (i) the applicable dollar amount, or \" the excess of (I) the participant's compensation (as defined in section 415(c) for the year, over \" any other elective deferrals of the participant for such year which are made without regard to this subsection. (B) Applicable dollar amount. For purposes of this paragraph, the applicable dollar amount with respect to a participant shall be an amount equal to (i) the aggregate amount of distributions described in section 72(t)(2)(G) made with respect to such participant, over \" the aggregate amount of such distributions previously taken into account under this subsection or section 219(b)(5)(B). (3) Treatment of contributions. Rules similar to the rules of paragraphs (3) and (4) of subsection (v) shall apply with respect to contributions made under this subsection. (4) Definitions. For purposes of this subsection, the terms 'applicable employer plan' and 'elective deferral' have the same meanings given such terms in subsection (v)(6). (4) Conforming amendment. Section 414(v)(2)(A) of such Code is amended by inserting (other than deferrals under subsection \" after \"deferrals\". (5) Effective date. The amendments made by this subsection shall apply to contributions in taxable years ending after December 31, 2001.\r\n\r\n**Ground Truth:** Amend the internal revenue code of 1986 to provide a waiver of the early withdrawal penalty for distributions from qualified retirement plans to individuals called to active duty during the national emergency declared by the president on september 14, 2001, and for other purposes. Military Call-up Relief Act - Amends the Internal Revenue Code to waive the ten percent early withdrawal penalty for distributions from qualified retirement plans to individuals called to active duty during the national emergency declared by the President on September 14, 2001.\r\n\r\n**Prediction:** military call-up relief act - Amends the Internal Revenue Code to allow a tax-exempt distribution to an individual who is a member of a reserve component called or ordered to active duty during the period of the national emergency declared by the President on September 14, 2001. Requires an employer plan to not be treated as failing for purposes of this Act. Provides for a catch-up contribution for certain distributions.\r\n\r\n**Example 2:\r\nSrc:** This Act may be cited as the \"National Climate Service Act of 2009\". The Congress finds the following: (1) Weather, climate change, and climate variability affect public safety, environmental services and security, human health, agriculture, energy use, water resources, and other factors vital to national security and human welfare. (2) Climate forecasts create opportunities for society to prepare, potentially reducing the costs of climate-related events, as well as serving national needs related to enhancing economic growth, managing risk, protecting life and property, and promoting environmental stewardship. (3) Information on predicted climate and climate impacts is not being fully disseminated or used well, despite the increasing predictability of climate. (4) The United States lacks adequate research, infrastructure, and coordinated outreach and communication mechanisms to meet national climate monitoring, prediction, and decision support needs for adapting to and mitigating the impacts of climate change and climate variability. (5) Increasing societal resilience to climate impacts requires understanding climate trends and variations as well as possible, understanding the impacts of climate on human and nonhuman systems, providing decision-relevant tools based on that information, and increasing society's capacity to act on that information. It is the purpose of this Act to establish a National Climate Service that will assist the Nation and the world in understanding, anticipating, and responding to climate, climate change, and climate variability and their impacts and implications. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. The Service shall be user-centric, by ensuring that the information is accessible, consistent with users' ability to respond, and based on user needs and limitations. The Service shall provide such usable information through a sustained network of observations, modeling, and research activities. (a) Establishment. (1) In general. The Secretary of Commerce shall establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service not later than one year after the date of enactment of this Act. The Service shall include a national center and a network of regional and local facilities for operational climate observation, modeling, and research. (2) General purpose. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. (3) Specific services. The Service, at minimum, shall (A) serve as a clearinghouse and technical access point to stakeholders for regionally and nationally relevant information on climate, climate impacts, and adaptation, developing comprehensive databases of information relevant to specific regional and national stakeholder needs. (B) provide education on climate impacts, vulnerabilities, and application of climate information in decisionmaking. (C) design decision-support tools that facilitate use of climate information in stakeholders' near-term operations and long-term planning (D) facilitate user access to climate and climate impacts experts for technical assistance in use of climate information and to inform the climate forecast community of their information needs. (E) provide researcher, modeler, and observations experts access to users to help guide direction of research, modeling, and observation activities. And (F) propose and evaluate adaptation strategies for climate variability and change. (4) Specific functions. The Service, at minimum, shall (A) integrate global, national, and regional observations to produce information and assessments of use to stakeholders and researchers, (B) develop climate models for decision support. (C) perform basic and applied research on climate dynamics and impacts relevant to stakeholder interests. (D) create and maintain an operational delivery system and facilitate transition of new climate applications products to Service member agencies. (E) establish an atmospheric monitoring and verification program utilizing aircraft, satellite, ground sensors, ocean and coastal observing systems, and modeling capabilities to monitor, measure, and verify greenhouse gas concentrations and emissions throughout the global oceans and atmosphere. (F) develop and maintain a dialog among research teams, Federal agencies, and stakeholders for developing information relevant for planning and decisionmaking. (G) identify climate-related vulnerabilities and build national capacity to increase resilience. (H) articulate regional and national climate issues and concerns in regional and national policy arenas and facilitate regional-national communications on Service needs and performance. And (I) outreach to stakeholder groups. (b) Action Plan. Within 1 year after the date of enactment of this Act, the Secretary of Commerce shall submit to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science and Technology of the House of Representatives a plan of action for the National Climate Service. The plan, at a minimum, shall (1) provide for the interpretation and communication of climate data, conditions, predictions, projections, and risks on an ongoing basis to decision and policy makers at the local, regional, and national levels. (2) design, deploy, and operate a national climate observing system that closes gaps in existing coverage. (3) support infrastructure and ability to archive and ensure the quality of climate data, and make federally funded model simulations and other relevant climate information available from Global Change Research Program activities and other sources. (4) include a program for long-term stewardship, quality control, development of relevant climate products, and efficient access to all relevant climate data, products, and model simulations. (5) establish a national coordinated modeling strategy, including a national climate modeling center to provide a dedicated capability for modeling and forecasting scenarios, and a regular schedule of projections on long-term and short- term time horizons over a range of scales, including regional scales. (6) improve integrated modeling, assessment, and predictive capabilities needed to document and forecast climate changes and impacts, and to guide national, regional, and local planning and decisionmaking. (7) provide a system of regular consultation and coordination with Federal agencies, States, tribes, nongovernmental organizations, the private sector, and the academic community to ensure (A) that the information requirements of these groups are well incorporated. And (B) timely and full sharing, dissemination and use of climate information and services in risk preparedness, planning, decisionmaking, and early warning and natural resources management, both domestically and internationally. (8) develop standards, evaluation criteria, and performance objectives to ensure that the Service meets the evolving information needs of the public, policy makers, and decisionmakers in the face of a changing climate, (9) develop funding estimates to implement the plan. And support competitive research programs that will improve elements of the Service described in this Act through the Climate Program Office within the Service headquarter function. (c) Director. The Administrator shall appoint a Director of the Service, who shall oversee all processes associated with managing the organization and executing the functions and actions described in this Act. (d) National Climate Service Advisory Council. The Administrator shall, in consultation with the Chairmen and ranking minority members of the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science and Technology of the House of Representatives, and the National Academy of Sciences, appoint the membership of a National Climate Service Advisory Council, with members serving 4-year terms, that shall include a diverse membership from appropriate Federal, State, and local government, universities, and nongovernment and private sectors who use climate information and cover a range of sectors, such as water, drought, fisheries, coasts, agriculture, health, natural resources, transportation, and insurance. The Council shall advise the Director of the Service of key priorities in climate-related issues that require the attention of the Service. The Council shall be responsible for promoting coordination across regional, national, and international concerns and the assessment of evolving information needs. Functions vested in any Federal officer or agency by this Act or under the program established under this Act may be exercised through the facilities and personnel of the agency involved or, to the extent provided or approved in advance in appropriation Acts, by other persons or entities under contracts or grant arrangements entered into by such officer or agency. The Secretary of Commerce shall prepare and submit to the President and the Congress, not later than March 31 of each year, a report on the activities conducted pursuant to this Act during the preceding fiscal year, including (1) a summary of the achievements of the National Climate Service during the previous fiscal year. And (2) an analysis of the progress made toward achieving the goals and objectives of the Service. (1) Administrator. The term \"Administrator\" means the Administrator of the National Oceanic and Atmospheric Administration. (2) Advisory council. The term \"Advisory Council\" refers to the National Climate Service Advisor Council. (3) Climate change. The term \"climate change\" means any change in climate over time, whether due to natural variability or as a result of human activity. (4) Director. The term \"Director\" means the director of the National Oceanic and Atmospheric Administration's National Climate Service. (5) Secretary. The term \"Secretary\" means the Secretary of Commerce. (6) Service. The term \"Service\" means the National Oceanic and Atmospheric Administration's National Climate Service. There are authorized to be appropriated to the Secretary to carry out this Act (1) $300,000,000 for fiscal year 2011. (2) $350,000,000 for fiscal year 2012. (3) $400,000,000 for fiscal year 2013. (4) $450,000,000 for fiscal year 2014. (5) $500,000,000 for fiscal year 2015.\r\n\r\n**Ground Truth:** Provide for the establishment of a national climate service, and for other purposes. National Climate Service Act of 2009 - Requires the Secretary of Commerce to establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service that includes a national center and a network of regional and local facilities for operational climate observation, modeling, and research. Requires the Service to: (1) inform the public about climate impacts. (2) serve as a clearinghouse and technical access point to stakeholders for information on climate, climate impacts, and adaptation, and relevant comprehensive databases of information. (3) provide education on climate impacts, vulnerabilities, and application of climate information in decisionmaking. (4) design decision-support tools that facilitate use of climate information in stakeholders' near-term operations and long-term planning. (5) facilitate user access to climate experts for technical assistance in the use of climate information and to inform the climate forecast community of their information needs. (6) provide researcher, modeler, and observations experts access to users to help guide direction of their activities. And (7) propose and evaluate adaptation strategies for climate variability and change. Sets forth the Service's functions, including establishing an atmospheric monitoring and verification program utilizing aircraft, satellite, ground sensors, ocean and coastal observing systems, and modeling capabilities to monitor, measure, and verify greenhouse gas concentrations and emissions throughout the oceans and atmosphere. Requires the Secretary to report to specified congressional committees on a plan of action for the Service. Requires the Administrator of NOAA to appoint a Director of the Service. Requires the Director to appoint members of a National Climate Service Advisory Council to promote coordination across regional, national, and international concerns and assess information needs.\r\n\r\n**Prediction:** the Secretary of Commerce shall establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service that will assist the Nation and the world in understanding, anticipating, and responding to climate, climate change, and climate variability and their impacts and implications. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. The service shall be user-centric, by ensuring that the information is accessible, consistent with users' ability to respond, and based on user needs and limitations. ",
"I encountered the similar with finetune with Fairseq version bart-large, https://github.com/pytorch/fairseq/issues/2347. Looks like it is a in complete sentence or starting from a middle of a sentence. \r\nIt will be great if you shed some light on this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,603 | 1,603 | CONTRIBUTOR | null | (from distilbart issue).
Hey @sshleifer , thanks for the distilled BART version I was able to fine tune it with the same script on BillSum dataset as T5 but the numbers are way different between the two. I just wanted to understand if I might be doing something wrong with regards to fine tuning distilBART, does it require student training everytime?
Reference numbers on BillSum Dataset:
T5-base:
avg_train_loss = tensor(1.5333, device='cuda:0')
avg_val_loss = tensor(1.4528, device='cuda:0')
epoch = 1
loss = tensor(1.6734, device='cuda:0')
rouge1 = 0.49188267841912325
rouge2 = 0.26436589848185027
rougeL = 0.3591894400892483
train_loss = tensor(1.6734, device='cuda:0')
val_loss = tensor(1.4528, device='cuda:0')
dBART-cnn-12-6:
avg_train_loss = tensor(1.3013, device='cuda:0')
avg_val_loss = tensor(1.4013, device='cuda:0')
epoch = 1
loss = tensor(1.4901, device='cuda:0')
rouge1 = 0.3681518923769047
rouge2 = 0.15683286277623087
rougeL = 0.23453727441540043
train_loss = tensor(1.4901, device='cuda:0')
val_loss = tensor(1.4013, device='cuda:0')
PS. I am using a modified version of the older finetune.py so it doesn't have Rouge for validation epochs.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5335/comments | https://api.github.com/repos/huggingface/transformers/issues/5335/events | https://github.com/huggingface/transformers/issues/5335 | 646,691,111 | MDU6SXNzdWU2NDY2OTExMTE= | 5,335 | BertForPreTraining and BertModel when loading TF checkpoints | {
"login": "jungwhank",
"id": 53588015,
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungwhank",
"html_url": "https://github.com/jungwhank",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! What is your library version? The `output_hidden_states` as argument to the call method was only added in the v3.0.0 version, which was released this morning."
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Hello,
I want to load tensorflow checkpoints and when I run code below
```python
config = BertConfig.from_json_file('./bert_config.json')
config.output_hidden_states = True
bert_model = BertForPreTraining.from_pretrained('./model.ckpt', from_tf=True, config=config)
......
self.bert = bert_model
.......
bert_seq_out, _ = self.bert(input_ids, token_type_ids=segment_ids, attention_mask=input_mask,
output_hidden_states=False)
```
got error like this
```
Traceback (most recent call last):
File "train_tf.py", line 738, in <module>
neg_log_likelihood = model.neg_log_likelihood(input_ids, segment_ids, input_mask, label_ids)
File "train_tf.py", line 605, in neg_log_likelihood
bert_feats = self._get_bert_features(input_ids, segment_ids, input_mask)
File "train_tf.py", line 542, in _get_bert_features
bert_seq_out, _ = self.bert(input_ids, token_type_ids=segment_ids, attention_mask=input_mask, output_hidden_states=False)
File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_hidden_states'
```
I think it is because I use `BertForPreTraining` instead of `BertModel`,
but when I use `BertModel`, I got error like below
```
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'BertModel' object has no attribute 'bias'
```
How can I fix and load TF checkpoint correctly?
any help will be appreciated | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5335/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5334/comments | https://api.github.com/repos/huggingface/transformers/issues/5334/events | https://github.com/huggingface/transformers/issues/5334 | 646,688,681 | MDU6SXNzdWU2NDY2ODg2ODE= | 5,334 | Add "labels" functionality for all TF Causal LM and Masked LM models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Awesome! I'm in!!! ",
"Yes please! Happy to help if you need it."
] | 1,593 | 1,594 | 1,594 | MEMBER | null | # 🚀 Feature request
Currently, it is not possible to calculate the loss of a CLM or MLM model using TF:
```python
from transformers import TFBertForMaskedLM, BertTokenizerFast
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
input_ids = tok("This is a test string.").input_ids
loss = model(input_ids, labels=input_ids)[0] # This is currently not possible
```
Currently the `call(...)` function does not accept a `labels` input and does not return the
loss. This should be implemented similar to how it was done for `BertForSequenceClassification`:
https://github.com/huggingface/transformers/blob/393b8dc09a97197df1937a7e86c0c6b4ce69c7e9/src/transformers/modeling_tf_bert.py#L918
All CLM and MLM TF models should be updated to accept `labels` as an input and return the corresponding loss.
## Motivation
1. This allows to use TFTrainer for CLM and MLM models.
2. This aligns TF with PT API
## Your contribution
Starting from 29.06/30.06 I want to implement all these features (with some help from @jplu if possible ;-))
Pinging @LysandreJik @julien-c @sgugger for notification.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5334/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5333/comments | https://api.github.com/repos/huggingface/transformers/issues/5333/events | https://github.com/huggingface/transformers/issues/5333 | 646,681,088 | MDU6SXNzdWU2NDY2ODEwODg= | 5,333 | XLNet with high CPU usage | {
"login": "yourh",
"id": 28811637,
"node_id": "MDQ6VXNlcjI4ODExNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/28811637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yourh",
"html_url": "https://github.com/yourh",
"followers_url": "https://api.github.com/users/yourh/followers",
"following_url": "https://api.github.com/users/yourh/following{/other_user}",
"gists_url": "https://api.github.com/users/yourh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yourh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yourh/subscriptions",
"organizations_url": "https://api.github.com/users/yourh/orgs",
"repos_url": "https://api.github.com/users/yourh/repos",
"events_url": "https://api.github.com/users/yourh/events{/privacy}",
"received_events_url": "https://api.github.com/users/yourh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | NONE | null | The same issue as these two:
https://github.com/huggingface/transformers/issues/1722#issue-517212978
https://github.com/huggingface/transformers/issues/1529#issue-507567684 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5332/comments | https://api.github.com/repos/huggingface/transformers/issues/5332/events | https://github.com/huggingface/transformers/issues/5332 | 646,655,857 | MDU6SXNzdWU2NDY2NTU4NTc= | 5,332 | Link to the example/summarization in doc is broken | {
"login": "ttxs69",
"id": 30420918,
"node_id": "MDQ6VXNlcjMwNDIwOTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/30420918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ttxs69",
"html_url": "https://github.com/ttxs69",
"followers_url": "https://api.github.com/users/ttxs69/followers",
"following_url": "https://api.github.com/users/ttxs69/following{/other_user}",
"gists_url": "https://api.github.com/users/ttxs69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ttxs69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ttxs69/subscriptions",
"organizations_url": "https://api.github.com/users/ttxs69/orgs",
"repos_url": "https://api.github.com/users/ttxs69/repos",
"events_url": "https://api.github.com/users/ttxs69/events{/privacy}",
"received_events_url": "https://api.github.com/users/ttxs69/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #5309"
] | 1,593 | 1,593 | 1,593 | NONE | null | The link to the example/summarization folder in [the-big-table-of-tasks](https://huggingface.co/transformers/examples.html#the-big-table-of-tasks) is broken because the folder's name has been changed to seq2seq | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5331/comments | https://api.github.com/repos/huggingface/transformers/issues/5331/events | https://github.com/huggingface/transformers/pull/5331 | 646,604,169 | MDExOlB1bGxSZXF1ZXN0NDQwODU3NTQ4 | 5,331 | Adds train_batch_size, eval_batch_size, and n_gpu to to_sanitized_dict output for logging. | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=h1) Report\n> Merging [#5331](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `1.03%`.\n> The diff coverage is `80.40%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5331 +/- ##\n==========================================\n- Coverage 77.01% 75.97% -1.04% \n==========================================\n Files 128 138 +10 \n Lines 21615 24292 +2677 \n==========================================\n+ Hits 16646 18455 +1809 \n- Misses 4969 5837 +868 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |\n| [src/transformers/configuration\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |\n| [src/transformers/configuration\\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |\n| ... and [170 more](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=footer). Last update [1af58c0...161e09d](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"It seems that this is also true for the wandb config logging, but instead it uses `vars(args)` instead of `args.to_sanitized_dict()`. Is there any reason the following line is not using`to_sanitized_dict()`?\r\n\r\nhttps://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/trainer.py#L330\r\n\r\nHere's a diff between my new implementation of `arg.to_sanitized_dict()` and `vars(args)`:\r\n\r\n```\r\n>>> from transformers import TrainingArguments\r\n>>> args = TrainingArguments(\"some_dir\")\r\n>>> set(vars(args).items()) - set(args.to_sanitized_dict().items())\r\n{\r\n (\"__cached__setup_devices\", (device(type=\"cpu\"), 0)),\r\n (\"tpu_num_cores\", None),\r\n (\"per_gpu_train_batch_size\", None),\r\n (\"per_gpu_eval_batch_size\", None),\r\n (\"save_total_limit\", None),\r\n}\r\n>>> set(args.to_sanitized_dict().items()) - set(vars(args).items())\r\n{\r\n (\"tpu_num_cores\", \"None\"),\r\n (\"per_gpu_train_batch_size\", \"None\"),\r\n (\"train_batch_size\", 8),\r\n (\"n_gpu\", 0),\r\n (\"eval_batch_size\", 8),\r\n (\"per_gpu_eval_batch_size\", \"None\"),\r\n (\"save_total_limit\", \"None\"),\r\n}\r\n```\r\n\r\nThe main difference is that in `to_sanitized_dict()` the `__cached_setup_devices` property is not saved, and instead of `None`, the value is stringified to `\"None\"` (and of course, with my new changes `train_batch_size`, `eval_batch_size`, and `n_gpu` are also recoreded).\r\n\r\nGonna add another commit to make the wandb config logging use `to_sanitized_dict`.",
"Hi @JetRunner , was hoping I can get an update on this PR. The test fail seems to be unrelated to my committed code.",
"@jaymody OK please wait for another approval "
] | 1,593 | 1,596 | 1,596 | CONTRIBUTOR | null | Closes #5330
```
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("dir")
>>> args.to_sanitized_dict()
{
"output_dir": "dir",
"overwrite_output_dir": False,
"do_train": False,
...
"train_batch_size": 8,
"eval_batch_size": 8,
"n_gpu": 0,
}
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5331",
"html_url": "https://github.com/huggingface/transformers/pull/5331",
"diff_url": "https://github.com/huggingface/transformers/pull/5331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5331.patch",
"merged_at": 1596459639000
} |
https://api.github.com/repos/huggingface/transformers/issues/5330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5330/comments | https://api.github.com/repos/huggingface/transformers/issues/5330/events | https://github.com/huggingface/transformers/issues/5330 | 646,603,944 | MDU6SXNzdWU2NDY2MDM5NDQ= | 5,330 | Better hyperparameter tensorboard logging in Trainer. | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,593 | 1,596 | 1,596 | CONTRIBUTOR | null | # 🚀 Feature request
`Trainer` should write `train_batch_size` and `eval_batch_size` to tensorboard. Currently only `per_device_` batch sizes are logged as hyperparamters, which means unless you know how many gpus you trained on (which is also not logged), you can't know the actual batch sizes used in training.
## Your contribution
Submitted a PR: [Adds train_batch_size, eval_batch_size, and n_gpu to to_sanitized_dict output for logging.](https://github.com/huggingface/transformers/pull/5331#issue-440857548)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5330/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5329/comments | https://api.github.com/repos/huggingface/transformers/issues/5329/events | https://github.com/huggingface/transformers/issues/5329 | 646,603,281 | MDU6SXNzdWU2NDY2MDMyODE= | 5,329 | Add option to keep tb_writer open after training is done. | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,599 | 1,599 | CONTRIBUTOR | null | Currently, `Trainer` automatically closes the tensorboard writer `tb_writer` after the training loop is done. Often I'll want to add extra observations, metrics, data, etc ... to the tensorboard event that requires the trained model (ie after the training loop is done). For example.
```
tb_writer = SummaryWriter(log_dir="some_dir")
# do some stuff with tb_writer before training
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
tb_writer=tb_writer,
)
trainer.train()
# do extra stuff with tb_writer after the model is trained (that requires a trained model)
```
## Your contribution
Submitted a PR: [Adds option to keep tb_writer open after training finishes](https://github.com/huggingface/transformers/pull/5328#issue-440856807) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5329/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/5328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5328/comments | https://api.github.com/repos/huggingface/transformers/issues/5328/events | https://github.com/huggingface/transformers/pull/5328 | 646,603,125 | MDExOlB1bGxSZXF1ZXN0NDQwODU2ODA3 | 5,328 | Adds option to keep tb_writer open after training finishes | {
"login": "jaymody",
"id": 26451316,
"node_id": "MDQ6VXNlcjI2NDUxMzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/26451316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaymody",
"html_url": "https://github.com/jaymody",
"followers_url": "https://api.github.com/users/jaymody/followers",
"following_url": "https://api.github.com/users/jaymody/following{/other_user}",
"gists_url": "https://api.github.com/users/jaymody/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaymody/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaymody/subscriptions",
"organizations_url": "https://api.github.com/users/jaymody/orgs",
"repos_url": "https://api.github.com/users/jaymody/repos",
"events_url": "https://api.github.com/users/jaymody/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaymody/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=h1) Report\n> Merging [#5328](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/393b8dc09a97197df1937a7e86c0c6b4ce69c7e9&el=desc) will **increase** coverage by `0.37%`.\n> The diff coverage is `50.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5328 +/- ##\n==========================================\n+ Coverage 77.54% 77.91% +0.37% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18831 18922 +91 \n+ Misses 5453 5362 -91 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+38.09%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=footer). Last update [393b8dc...d390b0d](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@JetRunner I'm not exactly sure why the circleci build failed. I had the same result from this pull request https://github.com/huggingface/transformers/pull/5331, but I can't figure out how if at all it's related to my added code.",
"It’s not, don’t worry.\nLet’s see if we can get that fixed. Otherwise, we’ll still be able to merge this PR since the CI failure has nothing to do with your added code.",
"@sgugger do you want to take a look at this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,593 | 1,604 | 1,604 | CONTRIBUTOR | null | Set the `close_tb_writer` parameter to `False` in `Trainer.train()` to keep the tensorboard writer open.
```
tb_writer = SummaryWriter(log_dir="some_dir")
# do some stuff with tb_writer before training
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
tb_writer=tb_writer,
)
trainer.train(close_tb_writer=False)
# you can now use tb_writer even after training!
```
Closes #5329 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5328/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5328",
"html_url": "https://github.com/huggingface/transformers/pull/5328",
"diff_url": "https://github.com/huggingface/transformers/pull/5328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5328.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/5327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5327/comments | https://api.github.com/repos/huggingface/transformers/issues/5327/events | https://github.com/huggingface/transformers/pull/5327 | 646,556,420 | MDExOlB1bGxSZXF1ZXN0NDQwODIyNDMx | 5,327 | [mBART] skip broken forward pass test, stronger integration test | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=h1) Report\n> Merging [#5327](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5327 +/- ##\n==========================================\n+ Coverage 77.69% 77.90% +0.21% \n==========================================\n Files 138 138 \n Lines 24291 24292 +1 \n==========================================\n+ Hits 18872 18924 +52 \n+ Misses 5419 5368 -51 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `93.87% <100.00%> (+6.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+0.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=footer). Last update [1af58c0...e077948](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | (1) I fixed the config of mbart to go from BLEU score 12 to BLEU score 26. This requires that the `decoder_start_token_id=lang_code['ro']` but that we filter that out when we decode.
(2) I added another, harder, integration test case. The expected result is only 1 word different than GT. This also adds implicit coverage for batched generation.
(3) tried for awhile to fix the broken slow GPU `test_enro_forward`.
On commits where the test passed, it doesn't pass any more. And the model hadn't been updated on S3 since March 26. So for now we skip it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5327/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5327",
"html_url": "https://github.com/huggingface/transformers/pull/5327",
"diff_url": "https://github.com/huggingface/transformers/pull/5327.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5327.patch",
"merged_at": 1593371309000
} |
https://api.github.com/repos/huggingface/transformers/issues/5326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5326/comments | https://api.github.com/repos/huggingface/transformers/issues/5326/events | https://github.com/huggingface/transformers/pull/5326 | 646,553,389 | MDExOlB1bGxSZXF1ZXN0NDQwODIwMTQ3 | 5,326 | In the run_ner.py example, give the optional label arg a default value | {
"login": "xuhdev",
"id": 325476,
"node_id": "MDQ6VXNlcjMyNTQ3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/325476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuhdev",
"html_url": "https://github.com/xuhdev",
"followers_url": "https://api.github.com/users/xuhdev/followers",
"following_url": "https://api.github.com/users/xuhdev/following{/other_user}",
"gists_url": "https://api.github.com/users/xuhdev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuhdev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuhdev/subscriptions",
"organizations_url": "https://api.github.com/users/xuhdev/orgs",
"repos_url": "https://api.github.com/users/xuhdev/repos",
"events_url": "https://api.github.com/users/xuhdev/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuhdev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=h1) Report\n> Merging [#5326](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.28%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5326 +/- ##\n==========================================\n+ Coverage 77.42% 77.70% +0.28% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18803 18871 +68 \n+ Misses 5481 5413 -68 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.65% <0.00%> (-0.73%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=footer). Last update [5543b30...bb7d305](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM!"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | Otherwise, if label is not specified, the following error occurs:
Traceback (most recent call last):
File "run_ner.py", line 303, in <module>
main()
File "run_ner.py", line 101, in main
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
File "/home/user/anaconda3/envs/bert/lib/python3.7/site-packages/transformers/hf_argparser.py", line 159, in parse_json_file
obj = dtype(**inputs)
TypeError: __init__() missing 1 required positional argument: 'labels' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5326/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5326",
"html_url": "https://github.com/huggingface/transformers/pull/5326",
"diff_url": "https://github.com/huggingface/transformers/pull/5326.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5326.patch",
"merged_at": 1593560736000
} |
https://api.github.com/repos/huggingface/transformers/issues/5325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5325/comments | https://api.github.com/repos/huggingface/transformers/issues/5325/events | https://github.com/huggingface/transformers/pull/5325 | 646,537,923 | MDExOlB1bGxSZXF1ZXN0NDQwODA3OTE3 | 5,325 | Added a model card README.md for my pretrained model. | {
"login": "Pradhy729",
"id": 49659913,
"node_id": "MDQ6VXNlcjQ5NjU5OTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/49659913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pradhy729",
"html_url": "https://github.com/Pradhy729",
"followers_url": "https://api.github.com/users/Pradhy729/followers",
"following_url": "https://api.github.com/users/Pradhy729/following{/other_user}",
"gists_url": "https://api.github.com/users/Pradhy729/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pradhy729/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pradhy729/subscriptions",
"organizations_url": "https://api.github.com/users/Pradhy729/orgs",
"repos_url": "https://api.github.com/users/Pradhy729/repos",
"events_url": "https://api.github.com/users/Pradhy729/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pradhy729/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=h1) Report\n> Merging [#5325](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.47%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5325 +/- ##\n==========================================\n+ Coverage 77.42% 77.90% +0.47% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18803 18919 +116 \n+ Misses 5481 5365 -116 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (+1.32%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=footer). Last update [5543b30...523ec13](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice work. What's your intended use case, @Pradhy729? Topic classification? Something else?",
"Thanks! A few different use cases actually. Better entity recognition, topic classification, sentiment extraction and hopefully play around with summarization as well. "
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5325/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5325",
"html_url": "https://github.com/huggingface/transformers/pull/5325",
"diff_url": "https://github.com/huggingface/transformers/pull/5325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5325.patch",
"merged_at": 1593419355000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5324/comments | https://api.github.com/repos/huggingface/transformers/issues/5324/events | https://github.com/huggingface/transformers/pull/5324 | 646,513,745 | MDExOlB1bGxSZXF1ZXN0NDQwNzg4NTA2 | 5,324 | More model cards | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=h1) Report\n> Merging [#5324](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5324 +/- ##\n==========================================\n+ Coverage 77.42% 77.48% +0.05% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18803 18816 +13 \n+ Misses 5481 5468 -13 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.93% <0.00%> (+0.66%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=footer). Last update [5543b30...711a3d4](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for fixing my typos :-)"
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5324",
"html_url": "https://github.com/huggingface/transformers/pull/5324",
"diff_url": "https://github.com/huggingface/transformers/pull/5324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5324.patch",
"merged_at": 1593421565000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5323/comments | https://api.github.com/repos/huggingface/transformers/issues/5323/events | https://github.com/huggingface/transformers/pull/5323 | 646,468,030 | MDExOlB1bGxSZXF1ZXN0NDQwNzUxNDQ2 | 5,323 | New model sharing tutorial | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=h1) Report\n> Merging [#5323](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.48%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5323 +/- ##\n==========================================\n+ Coverage 77.42% 77.91% +0.48% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18803 18921 +118 \n+ Misses 5481 5363 -118 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=footer). Last update [5543b30...00ff8c3](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"very cool! Let's share it with the hugging face twitter account when it's live to get more people to share their models on the hub!",
"That would be now then, it's live in the [master docs](https://huggingface.co/transformers/master/model_sharing.html)."
] | 1,593 | 1,593 | 1,593 | COLLABORATOR | null | This PR removes the old serialization guide which was a bit outdated and adds the new instructions to the current model sharing tutorial (rewritten in rst to have proper links to the rest of the docs).
Also it adds a link to the trainer tutorial recently merged in the quicktour.
Preview is [here](https://53867-155220641-gh.circle-artifacts.com/0/docs/_build/html/model_sharing.html) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5323/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5323",
"html_url": "https://github.com/huggingface/transformers/pull/5323",
"diff_url": "https://github.com/huggingface/transformers/pull/5323.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5323.patch",
"merged_at": 1593270603000
} |
https://api.github.com/repos/huggingface/transformers/issues/5322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5322/comments | https://api.github.com/repos/huggingface/transformers/issues/5322/events | https://github.com/huggingface/transformers/pull/5322 | 646,465,736 | MDExOlB1bGxSZXF1ZXN0NDQwNzQ5NjE2 | 5,322 | examples/seq2seq/run_eval.py fixes and docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=h1) Report\n> Merging [#5322](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.45%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5322 +/- ##\n==========================================\n+ Coverage 77.42% 77.88% +0.45% \n==========================================\n Files 138 138 \n Lines 24284 24284 \n==========================================\n+ Hits 18803 18914 +111 \n+ Misses 5481 5370 -111 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.74% <0.00%> (+0.58%)` | :arrow_up: |\n| [src/transformers/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=footer). Last update [5543b30...eae8c2d](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,593 | 1,593 | 1,593 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5322/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/5322",
"html_url": "https://github.com/huggingface/transformers/pull/5322",
"diff_url": "https://github.com/huggingface/transformers/pull/5322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/5322.patch",
"merged_at": 1593213643000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/5321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/5321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/5321/comments | https://api.github.com/repos/huggingface/transformers/issues/5321/events | https://github.com/huggingface/transformers/issues/5321 | 646,465,102 | MDU6SXNzdWU2NDY0NjUxMDI= | 5,321 | [{m}bart] Fix final_logits bias warning | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"also\r\n```bash\r\nSome weights of the model checkpoint at facebook/mbart-large-en-ro were not used when initializing BartForConditionalGeneration: ['lm_head.weight']\r\n```",
"Hey @sshleifer \r\nWant me to pick this one up?",
"It's non trivial. You probably have to update S3. I would pick another.",
"Maybe try to understand why \r\n```\r\ntests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_results\r\n```\r\nhas started failing, half of #5265 "
] | 1,593 | 1,595 | 1,595 | CONTRIBUTOR | null | This warning is safe to ignore, but can be easily fixed.
```bash
Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/mbart-large-en-ro and are newly initialized: ['final_logits_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/5321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/5321/timeline | completed | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.