| column | dtype | lengths / values |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
2,191
closed
Numpy compatibility for sentence piece
12-16-2019 20:32:02
12-16-2019 20:32:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=h1) Report > Merging [#2191](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ceae85ad60da38cacb14eca49f752669a4fe31dc?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2191/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2191 +/- ## ========================================== - Coverage 79.92% 79.92% -0.01% ========================================== Files 131 131 Lines 19469 19470 +1 ========================================== Hits 15561 15561 - Misses 3908 3909 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2191/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.1% <100%> (+0.01%)` | :arrow_up: | | [transformers/tests/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2191/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3V0aWxzLnB5) | `90.62% <0%> (-3.13%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=footer). Last update [ceae85a...cb6d54b](https://codecov.io/gh/huggingface/transformers/pull/2191?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome!
transformers
2,190
closed
Adding Finnish BERT.
We have trained BERT-base on Finnish text and wish to have it included in the library. Both cased and uncased models are available. You can see the paper [here](https://arxiv.org/abs/1912.07076) and a website for the model can be found [here](http://turkunlp.org/FinBERT/). These changes passed all the relevant tests (\*auto\*, \*common\*, \*_bert_test\*) including `test_model_from_pretrained` with the Finnish models.
12-16-2019 17:24:32
12-16-2019 17:24:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=h1) Report > Merging [#2190](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2190/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2190 +/- ## ======================================= Coverage 81.35% 81.35% ======================================= Files 120 120 Lines 18254 18254 ======================================= Hits 14851 14851 Misses 3403 3403 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2190/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `96.38% <ø> (ø)` | :arrow_up: | | [transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2190/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYmVydC5weQ==) | `87.09% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2190/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.31% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2190/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.75% <ø> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=footer). Last update [e92bcb7...3c1aede](https://codecov.io/gh/huggingface/transformers/pull/2190?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks really awesome, thanks for sharing! **Uskomatonta**!! (yes this is the only Finnish word I know) The new recommended way of uploading the files is inside folders, that way you'll be able to do `AutoModel.from_pretrained("TurkuNLP/bert-base-finnish-[un]cased-v1")` out-of-the-box, without even having to modify the lib's code (though we may want to add "bert-base-finnish-[un]cased-v1" as a shortcut name anyways). (We just [updated the documentation](https://github.com/huggingface/transformers/commit/855ff0e91d8b3bd75a3b1c1316e2efd814373764#commitcomment-36452545) this morning so this is very new) Do you want to re-upload the files inside folders? Or I can do it on our side too. Also the ArXiv link in the PR's post seems broken, is the paper not public yet? Anyways, thanks for sharing, this is awesome!<|||||>> (We just updated the documentation this morning so this is very new) Ah, just missed it then. > Do you want to re-upload the files inside folders? Or I can do it on our side too. Would be great if you could do it, thanks. > Also the ArXiv link in the PR's post seems broken, is the paper not public yet? 
The paper is scheduled to be announced on arXiv at 8pm EST so I'll fix the link tomorrow (in 12 hours or so).<|||||>Looks like a URL was incorrect in the original PR, fixed it. Merging now, thank you again!<|||||>By the way, would you guys be interested in beta-testing a new way of pre-training a tokenizer on a corpus, @haamis? @n1t0 is working on something that might be of interest to you.<|||||>Also @haamis we're rolling out model and contributor pages: e.g. https://huggingface.co/TurkuNLP Anything you would like to see added to this page? How can it be most helpful? Thanks!<|||||>> By the way, would you guys be interested in beta-testing a new way of pre-training a tokenizer on a corpus, @haamis? > > @n1t0 is working on something that might be of interest to you. I saw there was a faster implementation of the tokenizer in the works; improved speed would be nice. > Also @haamis we're rolling out model and contributor pages: e.g. https://huggingface.co/TurkuNLP > > Anything you would like to see added to this page? How can it be most helpful? Thanks! That looks good to me. Sorry about the late reply.
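As a quick illustration of the folder-based loading mentioned in the comments above (the model identifier is the one given in the discussion; treat it as an assumption if the hub name has since changed), loading the Finnish model would look roughly like this:

```python
from transformers import AutoTokenizer, AutoModel

# Identifier from the discussion above; the uncased variant would be
# "TurkuNLP/bert-base-finnish-uncased-v1".
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")
model = AutoModel.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")
```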
transformers
2,189
closed
Add support for XLM-RoBERTa
Hi, this PR adds support for the recently released XLM-RoBERTa model from the Facebook AI team. XLM-RoBERTa is described in the ["Unsupervised Cross-lingual Representation Learning at Scale"](https://arxiv.org/abs/1911.02116) paper from Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. The model itself is integrated into the `fairseq` library, and weights for the base and large XLM-R are available, see example [here]().

## Results

### NER

This PR also extends the `run_ner` script to support XLM-R. Results for NER (CoNLL datasets):

#### Base model

| Model | English | Dutch | Spanish | German | Avg. |
| ---------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| Paper | - (dev) / 91.95 | - (dev) / 91.21 | - (dev) / 88.46 | - (dev) / 83.65 | - (dev) / 88.82 |
| Reproduced | 95.31 (dev) / 91.20 | 91.66 (dev) / 91.37 | 85.23 (dev) / 88.15 | 87.11 (dev) / 84.02 | 89.83 (dev) / 88.69 |

#### Large model

| Model | English | Dutch | Spanish | German | Avg. |
| ---------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| Paper | - (dev) / 92.74 | - (dev) / 93.25 | - (dev) / 89.04 | - (dev) / 85.53 | - (dev) / 90.14 |
| Reproduced | 96.84 (dev) / 92.80 | 94.02 (dev) / 94.41 | 88.94 (dev) / 89.30 | 88.60 (dev) / 86.04 | 92.10 (dev) / 90.64 |

Parameters used for reproducing the paper results: 20 training epochs with a learning rate of `5.0e-6` and a batch size of 16. Only one run is reported here.

## Tasks

* [x] Upload model to 🤗/ Transformers S3
* [x] Add support for base model (convert script needs to be adjusted)
* [x] Report results for NER (CoNLL datasets)
* [x] Add XLM-R to `Auto*` interfaces
* [x] Check tokenization methods
12-16-2019 16:13:45
12-16-2019 16:13:45
@stefan-it Following the merge of #1959, you should not have to duplicate the weights conversion script anymore. It should work out of the box, `fairseq.XLMRModel` being a subclass of `fairseq.RobertaModel`.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=h1) Report > Merging [#2189](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2f1c745cded91b2f6cfed5b502ea5cbd7d6b9ac7?src=pr&el=desc) will **increase** coverage by `1.12%`. > The diff coverage is `45.45%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2189/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2189 +/- ## ========================================== + Coverage 80.2% 81.33% +1.12% ========================================== Files 125 125 Lines 18444 18458 +14 ========================================== + Hits 14793 15012 +219 + Misses 3651 3446 -205 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYmVydC5weQ==) | `100% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbV9yb2JlcnRhLnB5) | `100% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.75% <ø> (ø)` | :arrow_up: | | [transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: | | [transformers/configuration\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25feGxtX3JvYmVydGEucHk=) | `100% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.31% <ø> (ø)` | :arrow_up: | | [transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `96.38% <ø> (ø)` | :arrow_up: | | [transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG1fcm9iZXJ0YS5weQ==) | `36.92% <0%> (-0.58%)` | :arrow_down: | | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.42% <100%> (ø)` | :arrow_up: | | [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `36.36% <28.57%> (-0.53%)` | :arrow_down: | | ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/2189/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=footer). Last update [2f1c745...3376adc](https://codecov.io/gh/huggingface/transformers/pull/2189?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c Thanks for that hint! I tested the script (#1959) and it's working with XLM-R (training was sucessful). I'm going to test the base model now 😅<|||||>This is really awesome @stefan-it! Merging now!<|||||>@stefan-it Hi, how to save the XLM-Roberta model? I tried `torch.save` for the model but reported `*** TypeError: can't pickle SwigPyObject objects`
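Regarding the last question above (pickling errors with `torch.save`), a minimal sketch of the usual save/reload path: `save_pretrained` writes the config and weights to a directory and avoids pickling non-serializable members such as the SentencePiece processor held by the tokenizer. The directory name below is made up for illustration.

```python
from transformers import XLMRobertaModel, XLMRobertaTokenizer

model = XLMRobertaModel.from_pretrained("xlm-roberta-base")
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")

save_dir = "xlmr-finetuned"            # hypothetical output directory
model.save_pretrained(save_dir)        # writes config.json + pytorch_model.bin
tokenizer.save_pretrained(save_dir)

# Reload later
model = XLMRobertaModel.from_pretrained(save_dir)
tokenizer = XLMRobertaTokenizer.from_pretrained(save_dir)
```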
transformers
2,188
closed
About QuestionAnswering on SQuAD2.0 Dataset
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Do anyone understand the paper " QuestionAnswering on SQuAD2.0 Dataset" in section 5.1. It says "As we increase the number of epochs in training, the performance of the answerable questions is improved while the performance for the non-answerable questions drop hugely" <img width="393" alt="擷取" src="https://user-images.githubusercontent.com/32416416/70907720-88033d00-2044-11ea-9ab9-1f2437aefd11.PNG"> I want to know why cause this situation? And it also says One possible solution is that, the no-answer indicator is only from the first [CLS] token, the value of attentions to the [CLS] token may be much weaker than the word-word attention. Hence the Transformer may focus less on the attention associated with the [CLS] token. what does this mean about " the Transformer may focus less on the attention associated with the [CLS] token." Can anyone help? Thans a lot
12-16-2019 12:46:21
12-16-2019 12:46:21
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
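For readers puzzled by the [CLS]-based no-answer mechanism discussed above, here is a hedged sketch of how SQuAD 2.0 style null scoring is commonly implemented (variable names and hyperparameters are illustrative, not from any particular script): the [CLS] position doubles as the "no answer" span, and its score competes with the best non-null span.

```python
import torch

# Dummy per-token scores from a question-answering head; position 0 is [CLS].
start_logits = torch.randn(384)
end_logits = torch.randn(384)
max_answer_len, null_threshold = 30, 0.0   # assumed hyperparameters

null_score = (start_logits[0] + end_logits[0]).item()
best_span_score = max(
    (start_logits[i] + end_logits[j]).item()
    for i in range(1, len(start_logits))
    for j in range(i, min(i + max_answer_len, len(end_logits)))
)

# Predict "unanswerable" when the null span beats the best real span by a margin.
predict_no_answer = (null_score - best_span_score) > null_threshold
```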
transformers
2,187
closed
Output diverging on different GPUs using same prompt?
## 🐛 Bug Wondering if anyone else is noticing this, or I'm missing something. GPT-2 transformer run_generation.py Running on both AWS p2 and p3, which have different GPUs. Same text seed, same numerical seed (default: 42) The output is identical for a long segment, and then suddenly diverges, picking a different word on the P3, and then carrying on in a different direction after that. I'll do some more investigation and try to post examples. But first: Has anyone else noticed anything like this? Is this normal behavior?
12-16-2019 04:36:08
12-16-2019 04:36:08
How do you set your seed? Personally I use the following, which sets... a lot of seeds but also useful variables for the backend. ```python def set_seed(seed: Optional[int]): """ Set all seeds to make results reproducible (deterministic mode). When seed is None, disables deterministic mode. """ if seed is not None: torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False np.random.seed(seed) random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) ``` Also make sure that you are setting your model to `.eval()`.<|||||>Output is deterministic and reproducible on each GPU. I.e., on the AWS P3, I get Output A consistently whenever I run it with the same input On AWS P2, I get Output B consistently whenever I run it with the same input. Could there be any chance that the default seeding is somehow based on the GPU identifier? The other weird thing is that the divergence between Output A and Output B happens very far along in the output stream. They are the same for a long while, many hundreds of tokens, and then they diverge. I would think that a seed difference would cause them to diverge right away. <|||||>Hm, as far as I know different GPUs should still give consistent results with a fixed seed. Does it also happen with other generative models?<|||||>gpu-specific numerical imprecision?<|||||>Haven't tried other models, just GPT2. It's also hard to provide a reproduction case, because this happens with a big seed (something like 500 words), and happens a few thousand words into the output. To achieve this long output, which is longer than the 1024 tokens in the model vector, I'm reseeding over and over with the output, daisy chaining the last output to generate the next input. There is no nondeterminism in this reseeding process, though, and I've verified that the input to the sampling function is identical every step of the way, until that one word where one GPU finally makes a different choice. And this different choice is deterministic on each GPU. It always picks a different word at that spot. I can at least provide the output samples to show you what I mean. Also, I'll leave this open for now in case someone can shed some light on this or has the same problem. But after quite a bit of testing, I've determined that it's nothing on my end that is causing the divergence. Probably some GPU-specific imprecision that I can't control. It's also not a problem, exactly, but just something unexpected that made me take a closer look at my own code. Anyway, search for the word "hunting" in each of these to see where the output diverges. AWS P2 output sample: > When Oedipa got home she cried, didn't know how much, for many long minutes, and then came up with that often enough, two people's wanton moments being too much. Wendell lay across the back seat of her car and was sitting up when she arrived home. She started holding him more carefully, so as not to bump his head on the window, and almost drifted off until she could think of little else. "In other words, I won't remember my husband's name, Wendell," she told herself. > > "Or maybe it won't be him, and I'll know who died." She opened her window and sat down with her arms on the steering wheel and turned the key. "I won't be able to take it, won't be able to get there." > > She lost two hours, wasn't able to drive. She was sad. And maybe this wasn't the time to get angry. 
Oedipa had something of a position of privilege; she had just come through a dozen solid months with the murder of one man and a whole quarter of a century of acquaintance with that same man, when the only memory she could derive was of his unholy ghost that seemed to hide away as often as it was borne up to meet it. Was that at all real, her itchy sense that somebody was out there who wasn't quite supposed to be there, trailing slowly across the sun-kissed fields of lowlands and behind the straight and narrow lanes of what appeared to be an English village? > > What happened in the fall of 1966 and early 1967 wasn't the best of times, but if she had to go by only the state's case and two sworn affidavits, it was bad, she thought, with things festering under her. I think it's worse here than when I came here, she said herself, and then shifted her viewpoint; people are really bad here, people are all over the map, and yet she sees them all the same. > It could have all been better, she thought. "He was even worse than he was before," she thought, "and I'm the mother of his child." I really wish that he were here, she said, and felt a rumbling in her, remembering and disbelieving this last sentence, old hat. > > By the time she finished cooking breakfast the next morning, she felt a familiar course of fatigue (she knew this feeling, had felt it before and gone through it) but not quite because of anything she'd done. It was the same stuff of the weeks gone by. By three o'clock in the afternoon she was feverishly studying the latest installment in the The Winnipeg Free Press and wondering, if she was going to have another baby, what time they were going to have the baby. Later, after Wendell had gone to bed, and she had fallen into a restless, frenzied sleep, it became clear that those thoughts had been heading away from the child, toward her father. Her husband was supposed to come out and hug her one more time. And something strange happened: no hug. > > > Part 1 > > Fifteen-thirty on that sunny October day in the early nineteen-seventies in Lake of the Woods, an hour's drive from Indiana, was a normal day. It was a typical county-issue late afternoon: a burst of snow, mostly covering the ground at eight or nine o'clock; an overweight man riding a first-class brown ticket train, in cotton dress and gold plate badge, who was carrying a sapphire metallic briefcase into St. Martinville, Oklahoma City; he stood before the crescent-shaped office of the Mulberry County Auditor; one of those large stainless steel doors opened and a ruggedly handsome man walked out in his tan, easy-looking suit, without his outer appearance a warning sign to any observer. The man was Wendell Sams, chief of police of Mulberry County, and his name was Earl Sams. > > Earl Sams had been a cop for nineteen years. He'd been born on this farm in 1917 and made it into adulthood with farm-yard kinbaku and wide experience of the milieu of farmers' wives, country festivals, "cutesy songs and melodies and songs of the land," hunting a pig in a Louisiana cotton field, a hiker frolicking with a deer in the Ozark hills, living in two houses together, raising and maintaining eighty-seven kids, three cars, two planes, and a private railroad and a utility truck. ("It wasn't very good farming and it wasn't very good trucking, but it was only ten miles from downtown Atlanta," Earl Sams had once said.) 
Then there was the acreage; old-school equipment, all sorts of "pedestrian carts," tailgates and fountains of soft drinks, canned goods, canned food, that dreaded kimchi corn. > > When Earl Sams came along, the town was failing slowly. the factory and mill district around St. Martinville had been neglected, falling on hard times after the corn companies had shut their doors. It had two hospitals and a sheriff's office, but the infant would have to wait, it would have to wait. Nobody wanted to move into the county to take advantage of the increased area that Sams had planned to buy with the money he had just received in his current job. The road was lined with ranch houses and pulled up to many of them by the local back roads, where in the summer, with the grass growing deep and fast, they could have gravel runways that stretched over miles of dirt. There were a couple of old county airfields which now had strip lights and power lines just to the north. The county used to have a train depot in the 1920s, but the local farmer's crew did not like to travel from Kansas City to Hickman or Chapel Hill and start their shift, so in order to stay in business, the depot had been razed, leaving nothing but a fence and a ditto house. > AWS P3 output sample: > > When Oedipa got home she cried, didn't know how much, for many long minutes, and then came up with that often enough, two people's wanton moments being too much. Wendell lay across the back seat of her car and was sitting up when she arrived home. She started holding him more carefully, so as not to bump his head on the window, and almost drifted off until she could think of little else. "In other words, I won't remember my husband's name, Wendell," she told herself. > > "Or maybe it won't be him, and I'll know who died." She opened her window and sat down with her arms on the steering wheel and turned the key. "I won't be able to take it, won't be able to get there." > > She lost two hours, wasn't able to drive. She was sad. And maybe this wasn't the time to get angry. Oedipa had something of a position of privilege; she had just come through a dozen solid months with the murder of one man and a whole quarter of a century of acquaintance with that same man, when the only memory she could derive was of his unholy ghost that seemed to hide away as often as it was borne up to meet it. Was that at all real, her itchy sense that somebody was out there who wasn't quite supposed to be there, trailing slowly across the sun-kissed fields of lowlands and behind the straight and narrow lanes of what appeared to be an English village? > > What happened in the fall of 1966 and early 1967 wasn't the best of times, but if she had to go by only the state's case and two sworn affidavits, it was bad, she thought, with things festering under her. I think it's worse here than when I came here, she said herself, and then shifted her viewpoint; people are really bad here, people are all over the map, and yet she sees them all the same. > It could have all been better, she thought. "He was even worse than he was before," she thought, "and I'm the mother of his child." I really wish that he were here, she said, and felt a rumbling in her, remembering and disbelieving this last sentence, old hat. > > By the time she finished cooking breakfast the next morning, she felt a familiar course of fatigue (she knew this feeling, had felt it before and gone through it) but not quite because of anything she'd done. It was the same stuff of the weeks gone by. 
By three o'clock in the afternoon she was feverishly studying the latest installment in the The Winnipeg Free Press and wondering, if she was going to have another baby, what time they were going to have the baby. Later, after Wendell had gone to bed, and she had fallen into a restless, frenzied sleep, it became clear that those thoughts had been heading away from the child, toward her father. Her husband was supposed to come out and hug her one more time. And something strange happened: no hug. > > > Part 1 > > Fifteen-thirty on that sunny October day in the early nineteen-seventies in Lake of the Woods, an hour's drive from Indiana, was a normal day. It was a typical county-issue late afternoon: a burst of snow, mostly covering the ground at eight or nine o'clock; an overweight man riding a first-class brown ticket train, in cotton dress and gold plate badge, who was carrying a sapphire metallic briefcase into St. Martinville, Oklahoma City; he stood before the crescent-shaped office of the Mulberry County Auditor; one of those large stainless steel doors opened and a ruggedly handsome man walked out in his tan, easy-looking suit, without his outer appearance a warning sign to any observer. The man was Wendell Sams, chief of police of Mulberry County, and his name was Earl Sams. > > Earl Sams had been a cop for nineteen years. He'd been born on this farm in 1917 and made it into adulthood with farm-yard kinbaku and wide experience of the milieu of farmers' wives, country festivals, "cutesy songs and melodies and songs of the land," hunting the dawn and the sunset, among a horticultural and industrial past that, through mites and lies, was dressed up as simple piety, tradition, or familial environment. He was in his fifties, so that he looked little of his twenty-nine years and how well he matched the image of old-time independence he had been given. He'd been a lieutenant in the Corps of Coroners of Pulaski County, Arkansas, and then-run and more or less ad-hoc detective lieutenant on Davenport Road, for a short time in Lake County. The Mulberry County office seemed to be an unchanged place, although new signs had gone up informing the new chief that there would be a transfer. And yet Earl Sams liked it here, liked the cabin, the lake and the horses, and when the announcement was made that it would be his desk, he could just have stayed. > > His first words as chief were flat, indifferent. "To whom it may concern, sir," he said, "my name is Earl Sams." > > He had a wife, Rosalie, and they had four children; four boys and two girls. The boys were all little, and a couple of them were to be would-be sons-in-law. The eldest of them was fourteen and half; he was usually called Denton, for something and something else; and one of the girls was sixteen, a girl who had held her own in the boys' club, although now he could't really remember any one there. The boys had always liked him, laughed at him when he played catch with them, had found him amusing to be around. He liked them all, too, although when they were adults he didn't bother to find out their names, their names, their ages or their ages' dates. They were all small and stocky, tall but skinny, and bright. They were working people, but they kept his life in the background and seemed neither to get for him the unnecessary attention nor to get annoyed when he took time out to buy the place a new color and put up new signs. 
He bought the land himself, with luck, in an estate in Manitoba or Kansas or Tennessee, and he kept busy with old times, with horses and the crew, mending camp and clearing the woods and digging the drift and sharpening the bow, or tracking down grubs and other insects. The crew consisted of nothing but horses: two black from Arkansas, two Indian, one old pit bull. They tended the corn and lifted pigs and other sheep, took care of the cows, mopped up the mud, raised the chickens and pulled in the fertilizer. The younger brother was kind, handy with tools, and the father was a man with a truck and a radio. He used it and offered as much service as he could. He was tall, lean and clean-cut, with the deeply cool eyes of an oil-field man or two and a shrewdly-willed attitude about his job. As Chief Sams he was even a little hard to like, not unlike the older residents of Mulberry, but it wasn't his fault. Old men make mistakes, and it was no fault of theirs. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,186
closed
summarization code is incomplete
Hi, in the summarization code you have removed all the training part; why is that? Solely evaluating an existing model does not really have any point. While I really find this repo great, incomplete work like this summarization folder definitely detracts from the dignity of this repo. I would greatly appreciate it if you either removed this summarization folder or properly implemented it.
12-15-2019 20:52:08
12-15-2019 20:52:08
I'm sorry that you're angry with the Transformers library and its authors, but I don't share your opinion. This framework is well documented, developed and updated (the most important part of any library). However, if you want to inspect and/or train a model for the summarization task, you can refer [here](https://github.com/nlpyang/PreSumm), as said in the [README.md](https://github.com/huggingface/transformers/tree/master/examples/summarization). I do agree that, for completeness, a Python script that allows training a summarization model on a custom dataset would be useful for many people.<|||||>Hi, I really believe you would be better off fully removing this folder. Previously the training part was also included but was not complete; after weeks of waiting you decided to fully remove it? Why is this? Please reconsider your decision of including such code in the repository. Have you ever asked yourself what the point is of evaluating an already trained model while not allowing the user to actually train such models? If the user needs to revert back to the PreSumm repo, then let the user also evaluate there. There is no point in including code which is not complete; this is a bad choice and hurts the dignity of the repo in the long run.<|||||>@thomwolf I put Thomas in cc. I really believe adding such incomplete code is not proper and hurts the dignity of this repository in the long run.<|||||>@juliahane ... I gotta say that while I understand your frustration and what you are requesting, your attitude completely sucks and is unlikely to solicit a response from the huggingface team. There is no way in hell you can rationalize requesting the team to do away with the inference code based on the pre-trained model just because you can't fine-tune it for your own dataset. The code is complete insofar as what it intends to do. Now ... I'd love to have this be finetunable and would love to see what the HF team produces. In the meantime, I stepped through their code and figured out what you need to do in `modeling_bertabs.py` to make this just so. I'm glad to share with you, but in the words of Base Commander Colonel Nathan Jessup, "You gotta ask nicely."<|||||>Hi, thanks for your response. If I did not sound nice I apologize for it. Unfortunately I believe in every single word of what I said. Adding summarization just for evaluation does not help; then let people also revert back to PreSumm for it. I really don't get the point of adding loading and calling of pretrained models. Unfortunately I really believe such an attitude from your team will hurt the Hugging Face name in the long run. People see your repo as the greatest repo for deep learning, but if you start adding code like this, which does not train and is pointless from my view, it does change people's minds. I am sorry, this is the truth. Adding summarization code without allowing the user to train is pointless. Also, I expect you to be more welcoming towards complaints. There is really no point in loading pretrained models from another repo and letting the user call them. This is a legitimate complaint, and your attitude of guarding against it completely sucks.<|||||>Also I would like to add this point. Previously huggingface added summarization code that had an evaluation part, but it was not implemented and the code was failing in several parts; basically huggingface uploaded fully untested code. I respect good code. Everyone should respect writing good code which is tested. Later you removed that training part and left the evaluation part, which is still not complete code and really serves the user no functionality other than calling already trained models. Both acts, adding code which breaks in at least 10 places and is not at all complete in any sense (like adding flags and then not writing the conditions for them), really hurt your name in the long run, resulting in people losing trust in huggingface.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,185
closed
RuntimeError: CUDA error: device-side assert triggered when using Roberta
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Roberta Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) ``` class QuestModel(nn.Module): def __init__(self, n_classes=30): super(QuestModel, self).__init__() self.model_name = 'QuestModel' self.bert_model = models[MODEL_NAME].from_pretrained(MODEL_NAME) self.fc = nn.Linear(LINEAR_LAYER[MODEL_NAME], n_classes) self.dropout = nn.Dropout(p=0.2) def forward(self, ids, seg_ids): attention_mask = (ids > 0).float() layers, pool_out = self.bert_model(input_ids=ids, token_type_ids=seg_ids, attention_mask=attention_mask) out = self.dropout(pool_out) logit = self.fc(out) return logit ...... for i, (ids, seg_ids, labels) in enumerate(data_loader): ids, seg_ids, labels = ids.cuda(), seg_ids.cuda(), labels.cuda() outputs = data_parallel(model, (ids, seg_ids)) scores = torch.sigmoid(outputs) loss = loss_fn(outputs, labels) # loss = custom_loss(pred, y_batch.to(device)) preds.append(outputs.cpu().numpy()) original.append(labels.cpu().numpy()) avg_loss += loss.item() / len(data_loader) ...... ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) Google quest QA label ## To Reproduce Steps to reproduce the behavior: 1. I have a script that worked fine when I use bert from transformers package 2. Then I change tokenizer and model to roberta 3. Always got the following errors <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ``` /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [130,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 267, in <module> train(args) File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 138, in train score, val_loss = predict(model, val_loader) File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 252, in predict preds.append(outputs.cpu().numpy()) RuntimeError: CUDA error: device-side assert triggered ``` <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: CentOS * Python version: 3.6 * PyTorch version: 1.2 * PyTorch Transformers version (or branch): 2.2.2 * Using GPU ? Yes * Distributed of parallel setup ? All happened when I using 1, 2, 4 Gpu * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
12-15-2019 19:28:02
12-15-2019 19:28:02
Bert, XLnet all work fine for me<|||||>Have you ever read in the Issues section, e.g. #1852, #1848, #1849 and #1805? They suggest different solutions for your problem, e.g. changing the input sequence limit to 128. > ## Bug > Model I am using (Bert, XLNet....): > Roberta > Language I am using the model on (English, Chinese....): > English > The problem arise when using: > > * [ ] the official example scripts: (give details) > * [x] my own modified scripts: (give details) > > ``` > class QuestModel(nn.Module): > def __init__(self, n_classes=30): > super(QuestModel, self).__init__() > self.model_name = 'QuestModel' > self.bert_model = models[MODEL_NAME].from_pretrained(MODEL_NAME) > self.fc = nn.Linear(LINEAR_LAYER[MODEL_NAME], n_classes) > self.dropout = nn.Dropout(p=0.2) > > def forward(self, ids, seg_ids): > attention_mask = (ids > 0).float() > layers, pool_out = self.bert_model(input_ids=ids, token_type_ids=seg_ids, attention_mask=attention_mask) > > out = self.dropout(pool_out) > logit = self.fc(out) > return logit > > ...... > for i, (ids, seg_ids, labels) in enumerate(data_loader): > ids, seg_ids, labels = ids.cuda(), seg_ids.cuda(), labels.cuda() > outputs = data_parallel(model, (ids, seg_ids)) > scores = torch.sigmoid(outputs) > loss = loss_fn(outputs, labels) > # loss = custom_loss(pred, y_batch.to(device)) > preds.append(outputs.cpu().numpy()) > original.append(labels.cpu().numpy()) > > avg_loss += loss.item() / len(data_loader) > ...... > ``` > > The tasks I am working on is: > > * [ ] an official GLUE/SQUaD task: (give the name) > * [x] my own task or dataset: (give details) > Google quest QA label > > ## To Reproduce > Steps to reproduce the behavior: > > 1. I have a script that worked fine when I use bert from transformers package > 2. Then I change tokenizer and model to roberta > 3. Always got the following errors > > ``` > /opt/conda/conda-bld/pytorch_1565272279342/work/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [130,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. > Traceback (most recent call last): > File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 267, in <module> > train(args) > File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 138, in train > score, val_loss = predict(model, val_loader) > File "/mnt/home/dunan/Learn/Kaggle/google_quest/kaggle_google_quest/train.py", line 252, in predict > preds.append(outputs.cpu().numpy()) > RuntimeError: CUDA error: device-side assert triggered > ``` > > ## Environment > * OS: CentOS > * Python version: 3.6 > * PyTorch version: 1.2 > * PyTorch Transformers version (or branch): 2.2.2 > * Using GPU ? Yes > * Distributed of parallel setup ? All happened when I using 1, 2, 4 Gpu > * Any other relevant information: > > ## Additional context<|||||>Currently, roberta model has a problem when handling multiple sentences. Just removing token_type_ids from input or a bit modification will work. (https://github.com/huggingface/transformers/issues/1538)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
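A minimal sketch of the workaround suggested in the comments above, assuming the same variable names as the reported script: RoBERTa has a single token type, so BERT-style segment ids of 1 can index past its type-embedding table, and inputs longer than the position-embedding size trigger the same device-side assert. Truncating and dropping `token_type_ids` sidesteps both; note that RoBERTa pads with id 1, not 0, so the `(ids > 0)` mask also needs adjusting.

```python
MAX_LEN = 512  # assumed maximum sequence length for roberta-base
ids = ids[:, :MAX_LEN]
attention_mask = (ids != tokenizer.pad_token_id).float()  # tokenizer assumed to be in scope

# Drop token_type_ids entirely for RoBERTa
layers, pool_out = self.bert_model(input_ids=ids, attention_mask=attention_mask)
```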
transformers
2,184
closed
T5Tokenizer: Using cls_token, but it is not set yet.
## 🐛 Bug Model I am using (Bert, XLNet....): T5 ## To Reproduce Steps to reproduce the behavior: 1. Load T5Tokenizer 2. Try getting the CLS or SEP token: `tokenizer.sep_token` or `tokenizer.cls_token` 3. An error will be raised "Using cls_token, but it is not set yet." Running the latest commit on the master branch. I imagine that the T5 implementation is not complete yet, but I thought I'd point it out anyway. I figured that T5 expects the same input as BERT, as stated in the source: https://github.com/huggingface/transformers/blob/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f/transformers/modeling_t5.py#L651-L658 As an aside, I saw this incorrect mention of XLNetTokenizer in the T5Tokenizer. Probably overlooked while working on adding _yet another model_ to transformers. You guys are crazy! https://github.com/huggingface/transformers/blob/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f/transformers/tokenization_t5.py#L112-L120
12-15-2019 19:25:08
12-15-2019 19:25:08
I'm **not** able to import T5Tokenizer in my environment: I received an **ImportError** exception. I'm using Python 3.6.9, OS Ubuntu 16.04, Transformers 2.2.2 (installed now with `pip install transformers`), PyTorch 1.3.1 and TensorFlow 2.0. What am I missing? The stack trace is the following: ``` >>> import transformers >>> transformers.__version__ >>> '2.2.2' >>> from transformers import T5Tokenizer Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'T5Tokenizer' ``` I can't import `T5Model` as well. > ## Bug > Model I am using (Bert, XLNet....): T5 > > ## To Reproduce > Steps to reproduce the behavior: > > 1. Load T5Tokenizer > 2. Try getting the CLS or SEP token: `tokenizer.sep_token` or `tokenizer.cls_token` > 3. An error will be raised "Using cls_token, but it is not set yet." > > Running the latest commit on the master branch. > > I imagine that the T5 implementation is not complete yet, but I thought I'd point it out anyway. I figured that T5 expects the same input as BERT, as stated in the source: > > https://github.com/huggingface/transformers/blob/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f/transformers/modeling_t5.py#L651-L658 > > As an aside, I saw this incorrect mention of XLNetTokenizer in the T5Tokenizer. Probably overlooked while working on adding _yet another model_ to transformers. You guys are crazy! > > https://github.com/huggingface/transformers/blob/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f/transformers/tokenization_t5.py#L112-L120<|||||>@TheEdoardo93 T5 is _not_ part of the 2.2.2 release. In other words, it is not part of a PyPi release yet. You'll need to uninstall transformers and install from this repo.<|||||>I've installed Transformers from source with `pip install git+https://github.com/huggingface/transformers.git` and now it works as expected! Thank you > @TheEdoardo93 T5 is _not_ part of the 2.2.2 release. In other words, it is not part of a PyPi release yet. You'll need to uninstall transformers and install from this repo.<|||||>Yes, thanks for trying it already @BramVanroy. Currently, T5 is still missing a few features to be easily usable. I'll add them over the coming days/weeks. Here is what we still need and we plan to add: - a clean sequence generation API (T5 is designed to be used in a text generation setting, you should read the paper if you haven't already by the way, it's a great paper) <= working on this right now at #1840 - a clean way to do model parallelism to spread the model on several GPUs. For instance, the biggest checkpoint is 42 GB (!) so you'll need a few GPU only to load the model <= Working on this after the seq generation (first draft at #2165 ) - a script to pre-process GLUE/SQUAD to set them in text generation setting <= Later<|||||>I just skimmed through https://github.com/huggingface/transformers/pull/1840 and it's great to see how you're focusing on the user-experience @thomwolf. I think that that's a very important thing, together with well structured and written documentation. The field is changing rapidly, and especially for people who are working more on the linguistics side of things or who are just getting into transformers, a clear API is a must to get started. Thanks a lot for that (and your and the team's work in general, of course)! Concerning https://github.com/huggingface/transformers/pull/2165, why don't you just require people to rent TPUv3-2048 with 32**TB** of memory? No biggey. I kid of course! 
Nice to see that efforts are underway to bring these big models to consumers as well. This work completely goes over my head, but I am curious to see what the end result is. Would that mean that we could load that 42GB checkpoint on our 4x V100 (16GB each)? Perhaps this issue should stay open and be closed when T5 is finished?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
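A small check that illustrates the original report: `special_tokens_map` lists only the tokens a tokenizer actually defines, so it shows that T5, as a text-to-text model, has no [CLS] or [SEP] at all. The checkpoint name is assumed, and this requires an install from source at the time of writing.

```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
print(tok.special_tokens_map)  # expect eos/unk/pad style tokens, no cls_token or sep_token
```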
transformers
2,183
closed
Unit of the prediction scores of a language model
I have used the base transformer models for downstream tasks for a while now but I haven't had the time to dig into how the models were actually trained. When looking at the *ForMaskedLM models, I can see the return tuple contains `prediction_scores` for each token. > prediction_scores: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size) > Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). One could get the probabilities of vocabulary items by running a SoftMax over these prediction_scores, but my question is: what are these outputs themselves, what is their unit? In other words: during training, how were these outputs used? Since they are the primary output in the tuple, I suppose they were used in the loss function. At first I expected these to be **perplexity** but since they are returned before any softmax (and perplexity is 2^entropy), I don't see how that can be true. Still, these scores seem to be used to to get the most likely masked token in the [quickstart](https://huggingface.co/transformers/quickstart.html#bert-example). So if it's not probability and not perplexity, then what is its unit?
12-15-2019 16:16:37
12-15-2019 16:16:37
These are logits, i.e. unnormalized scores for each possible token at the masked token position. You can convert them in (normalized) probabilities by taking their softmax. I don't think you can really assign any unit to these scores, in particular, because they are not normalized so you can add any constant value to all these scores (as long as it's the same value for all tokens in the vocabulary) and still get the same probabilities after applying a softmax. We could return a softmax out of the model but if you only want to compute the argmax for instance (the most likely token), you can directly use these outputs so we don't want to force additional compute if some people don't need it. During training we don't use these output but the cross-entropy loss. The cross-entropy loss is obtained by first computing the logarithm of the softmax of these scores (log-probabilities) and then the negative log-likelihood of the target labels under this distribution. This is actually computed in one step by `torch.nn.CrossEntropyLoss` and returned as the loss of the model which is the first element of the tuple when you supply `mlm_labels` to a `XXXForMaskedLM` model.<|||||>Hey @thomwolf, thanks for taking the time to help me better understand the internals of the language models! Coincidentally, I was reading through this excellent article by Chip Huyen (@chiphuyen) (https://thegradient.pub/understanding-evaluation-metrics-for-language-models/). But if I understand you correctly, perplexity is not used in practice as a metric (I assume that it can't be evaluated anyway). Instead, CrossEntropyLoss is used, dealing with the MLM problem as a classification problem over C classes where C is the size of the vocabulary, correct? The labels would then be (presumably, internally) one-hot encoded vocabulary where there is only one `1` which is the expected token? For some reason I always thought that MLM involved perplexity or multi-label classification, i.e. where mask could have multiple correct tokens. I'm glad to now get a better understanding, so thanks again for your time.<|||||>Yeah this article by @chiphuyen is really great, I keep sharing it. I hope she writes more NLP articles in the future 😄 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
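To make the explanation above concrete, a short sketch with dummy tensors (shapes and names are illustrative): the raw `prediction_scores` are logits, softmax turns them into probabilities, and training combines log-softmax and negative log-likelihood in a single cross-entropy step.

```python
import torch
import torch.nn.functional as F

# Dummy shapes; in practice prediction_scores comes from a *ForMaskedLM model
prediction_scores = torch.randn(1, 8, 30522)   # (batch, seq_len, vocab_size) logits
labels = torch.randint(0, 30522, (1, 8))       # target token ids
masked_index = 3

probs = F.softmax(prediction_scores, dim=-1)            # normalized probabilities
top_token_id = probs[0, masked_index].argmax().item()   # most likely token at the mask

# What training does: log-softmax + negative log-likelihood in one step
loss = F.cross_entropy(prediction_scores.view(-1, prediction_scores.size(-1)), labels.view(-1))
```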
transformers
2,182
closed
sts-b task score is far worse than other GLUE tasks
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello, I'm evaluating my GPT style model pretrained on TEXT8 dataset with GLUE. Below is the evaluation result. ``` CoLA | SST-2 | MRPC |   QQP   | STS-B  | MNLI | QNLI | RTE | WNLI 19.1 | 85 | 82.5 / 71.6 | 78.4 / 82 | 41.6 / 39.4 | 62.8 | 74.6 | 57.4 | 56.3 ``` Compare to other GLUE tasks, STS-B scores seem far worse than baseline in leader board. Does anyone know why model works bad on STS? Thanks a lot.
12-15-2019 13:35:50
12-15-2019 13:35:50
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Have you resolved this issue? I observed a much worse result.
transformers
2,181
closed
Conda version is not the latest
## 🚀 Feature The conda package in the conda-forge channel (v2.1.1) is not the latest released version (v2.2.2), so the ALBERT model is missing from the package. ## Motivation In a conda environment we need the latest package containing the ALBERT model. ## Additional context Also, the ALBERT model's code has print statements for each of the 12 layers. I guess they shouldn't be there in a production-ready version.
12-15-2019 08:32:30
12-15-2019 08:32:30
Do you mean lines 66-67 and 153 in the [modeling_albert.py](https://github.com/huggingface/transformers/blob/master/transformers/modeling_albert.py) script? > ## Feature > The conda package in conda forge channel (v2.1.1) is not the latest released version (v2.2.2) so the ALBERT model is missing from the package. > > ## Motivation > In conda environment we need the latest packages containing the ALBERT model. > > ## Additional context > Also, for the ALBERT model in its code it has printing comments for each 12 layers. It shouldn't be there I guess in a production ready version.<|||||>@TheEdoardo93 in the last pip package in line 289 there was `print("Layer index", layer_index)` that is removed right now.<|||||>The last pip package (2.2.2) should not be outputting the layers. Could you please tell me what is the output in your console when running the following snippet? ```py from transformers import AlbertModel, __version__ import torch print(__version__) model = AlbertModel.from_pretrained("albert-base-v1") output = model(torch.tensor([[1,2,3]])) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,180
closed
Pretty sure patch in Pull Request #1313 is incorrect
## 🐛 Bug The bug was introduced in pull request #1313 If a stop_token is specified, but does not exist in **text**, then the last character of the text string is trimmed off. text.find will return -1 in that case, which ends up removing the last character from the string. Example: --stop_token="wake" `text = tokenizer.decode(o, clean_up_tokenization_spaces=True)` text = ' fishing with two ladies and the same two ladies had been fishing with him and all three ladies had run' ` text = text[: text.find(args.stop_token) if args.stop_token else None]` text = ' fishing with two ladies and the same two ladies had been fishing with him and all three ladies had ru' See how that final 'n' is trimmed off, even though "wake" does not occur in this string? The fix should be to first find out whether stop_token occurs in the text before doing the trimming, maybe: ``` if args.stop_token: loc = text.find(args.stop_token) if loc != -1: text = text[:loc] ```
12-15-2019 08:07:51
12-15-2019 08:07:51
And yes, the above code does fix it. Example: --stop_token="wake" text = ' to the same conclusion: that her husband, the guy she had chosen for her favorite, had been' trimmed text = ' to the same conclusion: that her husband, the guy she had chosen for her favorite, had been' text = ' murdered the previous spring. Oedipa closed her eyes and tried to wake up.' trimmed text = ' murdered the previous spring. Oedipa closed her eyes and tried to '<|||||>Thank you for raising this issue @jasonrohrer, there indeed was an error with the stop token. It's been fixed in 18a879f.
transformers
2,179
closed
Should I always use BERT as a teacher to distill DistilBERT as a student?
## ❓ Questions & Help Should I always use bert as a teacher to distill distilbert as a student? Is it fine RoBERTa model as a teacher to distill [distilbert](https://github.com/huggingface/transformers/blob/master/transformers/modeling_distilbert.py)? I assume roberta and distilbert use the same tokenizer and dataloader method.
12-15-2019 04:29:22
12-15-2019 04:29:22
Hello @graykode You can use whichever teacher you want, however in the method we propose, you need to make sure that the vocabularies match (knowledge distillation loss is applied to the distributions over the vocabulary). Victor<|||||>Thanks for your advice! I will close this issue
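A minimal sketch of the point about matching vocabularies (standard soft-target distillation with temperature `T`; the function name and default value are illustrative, not DistilBERT's actual training code). The KL-divergence compares teacher and student distributions element-wise over the vocabulary axis, so both logits tensors must share the same `vocab_size`:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # student_logits, teacher_logits: (batch, seq_len, vocab_size) with identical vocab_size,
    # i.e. teacher and student must use the same vocabulary.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),   # student log-probabilities
        F.softmax(teacher_logits / T, dim=-1),       # softened teacher targets
        reduction="batchmean",
    ) * (T ** 2)
```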
transformers
2,178
closed
Tokenize with offsets
Similar purpose to https://github.com/huggingface/transformers/pull/1274 (which I also used for most of the testing) but a different approach. It keeps track of token offsets by progressively tokenizing the text character by character and consuming matching tokens along the way. It returns just the start of a span. Tests were added for ALBERT, CTRL and T5. I think this implementation is more generic and simpler to understand, and the results are very good.
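For background, here is a deliberately naive illustration of the token-to-offset alignment problem this PR addresses, done with plain string matching rather than the progressive character-by-character retokenization the PR describes; the `subword_prefix` handling is an assumption that differs across tokenizers (`##` for BERT, `▁` for sentencepiece models, etc.):

```python
# Naive alignment of tokens back to character start offsets (illustrative only).
def naive_token_offsets(tokens, text, subword_prefix="##"):
    offsets, cursor = [], 0
    lowered = text.lower()
    for token in tokens:
        surface = token[len(subword_prefix):] if token.startswith(subword_prefix) else token
        start = lowered.find(surface.lower(), cursor)
        if start == -1:          # e.g. unknown tokens; fall back to the current cursor
            start = cursor
        offsets.append(start)
        cursor = start + len(surface)
    return offsets
```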
12-15-2019 03:34:40
12-15-2019 03:34:40
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=h1) Report > Merging [#2178](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26e04e51ef0774e681784d7be900c1119d46c52e?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2178/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2178 +/- ## ========================================== - Coverage 73.73% 73.73% -0.01% ========================================== Files 87 87 Lines 14921 14919 -2 ========================================== - Hits 11002 11000 -2 Misses 3919 3919 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `63.91% <0%> (ø)` | :arrow_up: | | [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2178/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `92.35% <100%> (-0.03%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=footer). Last update [26e04e5...7870a49](https://codecov.io/gh/huggingface/transformers/pull/2178?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@thomwolf This pull request is ready for review<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,177
closed
:zip: #2106 tokenizer.tokenize speed improvement (3-8x) by caching added_tokens in a Set
In #2106, we see that adding tokens to the tokenizer progressively degrades tokenization performance, which is not really a surprise as you need to go through a list of tokens that grows. But it seems that this slowdown is not linear. By having a quick look at the code, I've seen that: - the `added_tokens` list is built on every call of tokenize, - `all_special_tokens` is a python property that is reevaluated every time - `split_on_tokens` iterates twice over both the `all_special_tokens` and `added_tokens_encoder` lists, which is `O(n)`. I've tried to replace those with a simple cached Set of `added_tokens_encoder.keys() + all_special_tokens` that is reevaluated at each call of `add_tokens`. Firstly, it avoids rebuilding the list on every call. Secondly, searching in a Set is `O(1)` on average and `O(n)` in the worst case. On RobertaTokenizer, the result is a significant speed improvement (testing on 100,000 calls): - for 0 added tokens, `tokenizer.tokenize` is >3x faster - for 200 tokens, `tokenizer.tokenize` is >7x faster Here are a few interesting plots. ### Execution time when adding more tokens, old code vs. new <img width="404" alt="Screenshot 2019-12-14 at 15 34 22" src="https://user-images.githubusercontent.com/77193/70850213-167e8f80-1e88-11ea-8339-334ada7e5f37.png"> We see here that the old code is not linear and the execution time is impacted when more tokens are added. The new code seems to behave linearly (up to 200 at least). ### Rate of speed increase between old code and new <img width="372" alt="Screenshot 2019-12-14 at 15 33 09" src="https://user-images.githubusercontent.com/77193/70850210-036bbf80-1e88-11ea-9778-d59e1e3e83c7.png"> We see that the new code is 3x faster by default and this advantage grows when adding more tokens (>7x for 200). ### Execution time between old code and new in a bar plot <img width="399" alt="Screenshot 2019-12-14 at 15 33 14" src="https://user-images.githubusercontent.com/77193/70850206-fe0e7500-1e87-11ea-808e-1c9043a5ec4b.png"> Same as the previous plot. I know you're working on Rust tokenizers that will be much faster in theory. But until they're ready, what do you think about this basic correction (and maybe others) that already improves the speed drastically? Don't hesitate to say so if you think this modification would be bad for other cases.
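A rough sketch of the caching idea described above (class and attribute names are illustrative, not the library's actual internals): the union of added and special tokens is kept in a set that is rebuilt only when tokens are added, so membership checks during tokenization are O(1) on average instead of O(n) list scans on every call.

```python
class CachedTokenLookup:
    """Illustrative sketch of caching added + special tokens in a set."""

    def __init__(self, special_tokens):
        self.added_tokens_encoder = {}             # token -> id for user-added tokens
        self.all_special_tokens = list(special_tokens)
        self._refresh_cache()

    def _refresh_cache(self):
        # Rebuilt only here, not on every tokenize() call.
        self.unique_added_tokens = set(self.added_tokens_encoder) | set(self.all_special_tokens)

    def add_tokens(self, new_tokens):
        for token in new_tokens:
            self.added_tokens_encoder.setdefault(token, len(self.added_tokens_encoder))
        self._refresh_cache()

    def is_added_or_special(self, token):
        return token in self.unique_added_tokens   # O(1) average set lookup
```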
12-14-2019 14:45:11
12-14-2019 14:45:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=h1) Report > Merging [#2177](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2177/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2177 +/- ## ========================================== + Coverage 81.35% 81.36% +<.01% ========================================== Files 120 120 Lines 18254 18256 +2 ========================================== + Hits 14851 14854 +3 + Misses 3403 3402 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2177/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `90.74% <100%> (+0.24%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=footer). Last update [e92bcb7...cc01351](https://codecov.io/gh/huggingface/transformers/pull/2177?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I've seen there was a similar PR https://github.com/huggingface/transformers/pull/1881 but focusing on special tokens. The current one is a bit more generic IMHO. You'll tell me what you think about it.<|||||>Thank you for the detailed report, this is very cool. It looks good to me, great work @mandubian!<|||||>Awesome, thanks a lot @mandubian
transformers
2,176
closed
run_squad.py for SQuAD2.0 has a bad F1 score
Why do I get a bad F1 score (43.638) when I use run_squad.py for SQuAD 2.0? The NoAns_f1 is 0.0, so it looks like the script does not deal with the unanswerable questions. I didn't change anything, I just ran it like this: python3 run_squad.py \ --model_type bert \ --model_name_or_path bert-base-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file /share/nas165/Wendy/transformers/examples/tests_samples/SQUAD/train-v2.0.json \ --predict_file /share/nas165/Wendy/transformers/examples/tests_samples/SQUAD/dev-v2.0.json \ --per_gpu_train_batch_size 4 \ --learning_rate 4e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /share/nas165/Wendy/transformers/examples/squad_debug_SQuAD_1213_bert/ Does anyone know what to do? Thanks a lot
12-14-2019 13:14:53
12-14-2019 13:14:53
Can you please post the required version numbers? (Should be in the issue template)<|||||>the version numbers is 2.2.0<|||||>- Is it answering every question? for V2 you might want this flag passed https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L409 - Your model is cased? https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L428 --- Edit: I should say I haven't actually done this. I just know this flag exists. <|||||>@WenTingTseng If you want to use SQuAD v2.0, you have to pass the `--version_2_with_negative` argument to `run_squad.py`, otherwise the model supposes every question has at least an answer. Without that flag, you are basically not "learning" the no-answerable part.<|||||>ok, problem is resolved Thanks a lot
transformers
2,175
closed
merge new version
12-14-2019 12:16:23
12-14-2019 12:16:23
ok
transformers
2,174
closed
RobertaTokenizer token type issue
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Why the two middle `<\s>` are both assigned with token type 0? https://github.com/huggingface/transformers/blob/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f/transformers/tokenization_roberta.py#L149 Could this one be better? ```python return len(cls + token_ids_0 + sep) * [0] + len(sep + token_ids_1 + sep) * [1] ```
12-14-2019 09:29:15
12-14-2019 09:29:15
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>having the same question... why add two seps instead of one?
transformers
2,173
closed
run_squad with roberta
Hi @julien-c @thomwolf, this PR is based on #1386 and #1984. - This PR modified run_squad.py and modeling_roberta to support RoBERTa. - This PR also made use of multiprocessing to accelerate converting examples to features. (Converting examples to features needed **15 minutes before and 34 seconds now** with 24 CPU cores' acceleration. The number of threads is 1 by default, which matches the original single-process speed). - The result of RoBERTa-large on SQuAD 1.1: `{'exact': 87.26584673604542, 'f1': 93.77663586186483, 'total': 10570, 'HasAns_exact': 87.26584673604542, 'HasAns_f1': 93.77663586186483, 'HasAns_total': 10570, 'best_exact': 87.26584673604542, 'best_exact_thresh': 0.0, 'best_f1': 93.77663586186483, 'best_f1_thresh': 0.0}`, which is slightly lower than #1386 in a single run. Parameters are `python ./examples/run_squad.py --model_type roberta --model_name_or_path roberta-large --do_train --do_eval --do_lower_case \ --train_file data/squad1/train-v1.1.json --predict_file data/squad1/dev-v1.1.json --learning_rate 1.5e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir ./models_roberta/large_squad1 --per_gpu_eval_batch_size=3 --per_gpu_train_batch_size=3 --save_steps 10000 --warmup_steps=500 --weight_decay=0.01`. Hopefully this gap can be improved with `add_prefix_space=true`; I will do this comparison in the next few days. - The result of RoBERTa-base is `{'exact': 80.65279091769158, 'f1': 88.57296806525736, 'total': 10570, 'HasAns_exact': 80.65279091769158, 'HasAns_f1': 88.57296806525736, 'HasAns_total': 10570, 'best_exact': 80.65279091769158, 'best_exact_thresh': 0.0, 'best_f1': 88.57296806525736, 'best_f1_thresh': 0.0}`. RoBERTa-base was also tested since it's easier to reproduce. - The result of bert-base-uncased is `{'exact': 79.21475875118259, 'f1': 87.13734938098504, 'total': 10570, 'HasAns_exact': 79.21475875118259, 'HasAns_f1': 87.13734938098504, 'HasAns_total': 10570, 'best_exact': 79.21475875118259, 'best_exact_thresh': 0.0, 'best_f1': 87.13734938098504, 'best_f1_thresh': 0.0}`. This was tested to check the influence of multiprocessing on other models; the result is the same as the bert-base-uncased result without multiprocessing. - Hope that someone else can help to reproduce my results, thank you! I will continue to look for ways to improve the roberta-large results. - SQuAD1 model on Google Drive: [roberta-large-finetuned-squad](https://drive.google.com/drive/folders/1BZJeOeri_cKGUG_cRI5OCmkWC5deQqcc?usp=sharing)
12-14-2019 01:16:59
12-14-2019 01:16:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=h1) Report > Merging [#2173](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7bd11dda6f43656cf0a3891b7f61a67196d233b4?src=pr&el=desc) will **decrease** coverage by `1.35%`. > The diff coverage is `9.09%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2173/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2173 +/- ## ========================================== - Coverage 80.79% 79.43% -1.36% ========================================== Files 113 113 Lines 17013 17067 +54 ========================================== - Hits 13745 13558 -187 - Misses 3268 3509 +241 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/data/metrics/squad\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvbWV0cmljcy9zcXVhZF9tZXRyaWNzLnB5) | `0% <0%> (ø)` | :arrow_up: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `53.2% <21.21%> (-18.57%)` | :arrow_down: | | [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.75% <5.5%> (+0.56%)` | :arrow_up: | | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.72% <0%> (-85.42%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `80.51% <0%> (-16.42%)` | :arrow_down: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `72.21% <0%> (-2.33%)` | :arrow_down: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.27% <0%> (-2.21%)` | :arrow_down: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2173/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.13% <0%> (-1.33%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=footer). Last update [7bd11dd...805c21a](https://codecov.io/gh/huggingface/transformers/pull/2173?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Really nice job! 
Here are my results of RoBERTa-large on SQuAD using this PR: `Results: {'exact': 84.52792049187232, 'f1': 88.0216698977779, 'total': 11873, 'HasAns_exact': 80.90418353576248, 'HasAns_f1': 87.9017015344667, 'HasAns_total': 5928, 'NoAns_exact': 88.1412952060555, 'NoAns_f1': 88.1412952060555, 'NoAns_total': 5945, 'best_exact': 84.52792049187232, 'best_exact_thresh': 0.0, 'best_f1': 88.02166989777776, 'best_f1_thresh': 0.0}` The hyper-parameters are as follows: `python ./examples/run_squad.py \ --model_type roberta \ --model_name_or_path roberta-large \ --do_train \ --do_eval \ --do_lower_case \ --train_file data/squad2/train-v2.0.json \ --predict_file data/squad2/dev-v2.0.json \ --learning_rate 2e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./models_roberta/large_squad2 \ --per_gpu_eval_batch_size=6 \ --per_gpu_train_batch_size=6 \ --save_steps 10000 --warmup_steps=500 --weight_decay=0.01 --overwrite_cache --overwrite_output_dir --threads 24 --version_2_with_negative` <|||||>Really nice, thanks a lot @erenup
transformers
2,172
closed
[cli] Upload is now compatible with folders
12-13-2019 21:10:25
12-13-2019 21:10:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=h1) Report > Merging [#2172](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d46147294852694d1dc701c72b9053ff2e726265?src=pr&el=desc) will **increase** coverage by `0.48%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2172/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2172 +/- ## ========================================== + Coverage 80.32% 80.81% +0.48% ========================================== Files 114 113 -1 Lines 17102 16999 -103 ========================================== Hits 13737 13737 + Misses 3365 3262 -103 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/2172/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.49% <0%> (+0.3%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=footer). Last update [d461472...fb92209](https://codecov.io/gh/huggingface/transformers/pull/2172?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,171
closed
Small run_squad nit: eliminate trailing "_" in "best_predictions_.json" when no prefix
## 🐛 Bug <!-- Important information --> Prior convention for tf-based run_squad is to output best predictions in an nbest_predictions.json file. Now with the new convention of including a "prefix" in the generation of potentially many nbest files, in cases where there's not prefix, the name becomes nbest_predictions_.json (nothing after the "_"). Might be more backward compatible to include the rightmost "_" only when there's a prefix after it. The same thing applies to predictions and null_odds, ... Model I am using (Bert, XLNet....): albert Language I am using the model on (English, Chinese....): English The problem arise when using: * [X] the official example scripts: run_squad.py * [ ] my own modified scripts: (give details) The tasks I am working on is: * [X] an official GLUE/SQUaD task: SQuAD2 * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. look at output of a successful run of run_squad.py <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: * Python version: * PyTorch version: * PyTorch Transformers version (or branch): v2.1.1, from source * Using GPU ? * Distributed of parallel setup ? * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
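A possible shape of the suggested fix, written with illustrative names rather than the actual run_squad.py variables: the underscore is only appended when a prefix exists, preserving the old `nbest_predictions.json` name in the no-prefix case.

```python
import os

prefix = ""  # e.g. a checkpoint suffix; empty when evaluating a single checkpoint
suffix = "_{}".format(prefix) if prefix else ""
output_nbest_file = os.path.join("output_dir", "nbest_predictions{}.json".format(suffix))
# -> "output_dir/nbest_predictions.json" instead of "output_dir/nbest_predictions_.json"
```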
12-13-2019 20:20:08
12-13-2019 20:20:08
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>As it's only a cosmetic change and for the sake of not breaking backward compat over cosmetic issues I'm reluctant to change this.
transformers
2,170
closed
BertForSequenceClassification() model TF to pytorch conversion
I added a script, convert_bert_seqclass_tf_checkpoint_to_pytorch.py, for the conversion of a trained BertForSequenceClassification model from TF to PyTorch. I had to modify modeling_bert.py to support it as well.
12-13-2019 19:46:59
12-13-2019 19:46:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=h1) Report > Merging [#2170](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c8ed1c82c8a42ef700d4129d227fa356385c1d60?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `0%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2170/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2170 +/- ## ========================================== - Coverage 80.35% 80.34% -0.01% ========================================== Files 114 114 Lines 17095 17097 +2 ========================================== Hits 13736 13736 - Misses 3359 3361 +2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2170/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.43% <0%> (-0.32%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=footer). Last update [c8ed1c8...e3c65da](https://codecov.io/gh/huggingface/transformers/pull/2170?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,169
closed
How to structure input data for training TFGPT2LMHeadModel using model.fit() in TF2.0?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am able to use the run_lm_finetuning.py script easily, but I wish to be able to use TF2.0 and call model.fit() on distilgpt2. using the fine-tuning script as an example, I structured my dataset as such: ``` #split text file by lines, tokenize lines, convert tokenized lines to integers examples = [] with open(file_path, encoding="utf-8") as f: text = f.readlines() for line in text: examples.append(tokenizer.encode(line)) #pad examples to appropriate length pad_examples = tf.keras.preprocessing.sequence.pad_sequences(examples, maxlen=256,padding='post', truncating='post', value=tokenizer.pad_token_id) #create dataset, in the finetuning script, labels were just copies of the "examples" input array dataset = tf.data.Dataset.from_tensor_slices((pad_examples, pad_examples)) BATCH_SIZE = 4 BUFFER_SIZE = 10000 dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) ``` However, when I compile and run the script like this: ``` optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=[loss, None, None, None, None, None, None], metrics=[metric]) model.fit(dataset, epochs=3) ``` my loss is NaN, and no training is happening. I believe I am missing something fundamental about how GPT2 works with inputs/labels, but it seems from the torch script the labels and inputs are the same array and calling model.train() works just fine. Any ideas would be greatly appreciated, as I have an existing TF2.0 architecture I am trying to connect to GPT-2 and being able to call model.fit() would be preferable. I have no issues fine-tuning BERT in TF2.0 with model.fit() as it is much clearer what the inputs and labels are in that case.
12-13-2019 18:54:09
12-13-2019 18:54:09
Hi, there is a fundamental difference between PyTorch and TensorFlow in that the losses for PyTorch can be computed both inside the model forward method as well as outside, whereas it is only outside for TensorFlow. This makes a difference when comparing the torch script and keras fit, as our GPT-2 implementation automatically computes the loss when giving the `labels` argument to the model, which would not be the case for TensorFlow. It computes the loss by shifting the examples by one index and comparing the model's token prediction to the true token value. In order to train your GPT-2 (or DistilGPT-2) model on language modeling, you would have to create a dataset with: - the examples - the labels: these are the examples but shifted by one index so that the model may compare its prediction of the following token compared to the true token. Let me know if you have additional questions.<|||||>@LysandreJik Thank you! An additional question: I have a text file with one sentence per line, and an end of sentence token at the end of each line. It seems I should concatenate the text into one long string and sweep a "window" of a specified size over the text like so: ``` self.examples = [] with open(file_path, encoding="utf-8") as f: text = f.read() tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text)) for i in range(0, len(tokenized_text)-block_size+1, block_size): # Truncate in block of block_size self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i:i+block_size])) ``` and so in order to get examples I would need to specify a length of tokens for training and my inputs would be `examples_chunk[:-1]` and my labels would be `examples_chunk[1:]` ?<|||||>Yes, you could also use `tokenizer.encode(text)`, rather than `tokenize` > `convert_tokens_to_ids` > `build_inputs_with_special_tokens`.<|||||>@LysandreJik Unfortunately I am still getting NaN loss and no training? Here is the code, which I assume is still not correct somehow but I cannot seem to figure out why. ``` with open(file_path, encoding="utf-8") as f: text = f.read() tokenized_text = tokenizer.encode(text) examples = [] block_size = 100 for i in range(0, len(tokenized_text)-block_size+1, block_size): # Truncate in block of block_size examples.append(tokenized_text[i:i+block_size]) inputs, labels = [], [] for ex in examples: inputs.append(ex[:-1]) labels.append(ex[1:]) dataset= tf.data.Dataset.from_tensor_slices((inputs,labels)) BATCH_SIZE = 16 BUFFER_SIZE = 10000 dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=[loss, None, None, None, None, None, None], metrics=[metric]) model.fit(dataset, epochs=3) ```<|||||>What is your input file? I'm running your code and I do get a decreasing loss alongside an increasing accuracy. Here's a [gist](https://gist.github.com/LysandreJik/c958925768eb6a9a72609ea99561d1cb) with the self-contained code (the text is in the file), let me know if running this still outputs a NaN loss.<|||||>Thanks so much for your help. That code works just as you described with your text file, and also works fine when I use my own text file. I discovered that the issue is coming from adding special tokens to the tokenizer. 
My text file is made up of one sentence per line such as: ``` <start> this is an example sentence from my text file <end> <start> this is line two of my file <end> ``` When I don't change the bos_token and eos_token, I get decreasing loss and increasing accuracy. Adding the following code is what results in a NaN loss: ``` #change eos and bos tokens special_tokens_dict = {'bos_token':"<start>", 'eos_token':"<end>"} tokenizer.add_special_tokens(special_tokens_dict) ``` Any idea why this could be? Thank you again for the help. EDIT: included the code with the problem block ``` with open(file_path, encoding="utf-8") as f: text = f.read() #change eos and bos tokens special_tokens_dict = {'bos_token':"<start>", 'eos_token':"<end>"} tokenizer.add_special_tokens(special_tokens_dict) tokenized_text = tokenizer.encode(text) examples = [] block_size = 100 for i in range(0, len(tokenized_text)-block_size+1, block_size): # Truncate in block of block_size examples.append(tokenized_text[i:i+block_size]) inputs, labels = [], [] for ex in examples: inputs.append(ex[:-1]) labels.append(ex[1:]) dataset= tf.data.Dataset.from_tensor_slices((inputs,labels)) BATCH_SIZE = 16 BUFFER_SIZE = 10000 dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy') model.compile(optimizer=optimizer, loss=[loss, None, None, None, None, None, None], metrics=[metric]) model.fit(dataset, epochs=3) ``` In following the documentation, it appears that any time I try and run the command : `model.resize_token_embeddings(len(tokenizer))` I get a `NotImplementedError` If, however, I assign the bos and eos tokens when I first create the tokenizer: `tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2", bos_token='<start>', eos_token='<end>')` Training results in decreasing loss and increasing accuracy. I realized this thread has drifted quite a bit, so I would be happy to close this and start another tokenizer thread. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,168
closed
CUDA error at 'cublasSgemm' when using the pretrained BERT
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Language I am using the model on (English, Chinese....): The problem arise when using: * [ ] the official example scripts: (give details) * [v] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [v] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: My code: ``` torch.cuda.set_device(0) sequence_output, pooled_output = self.bert(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask) sequence_output = self.dropout(sequence_output) pooled_output = self.dropout(pooled_output) sense_logits = self.sense_classifier(pooled_output) arg_logits = self.arg_classifier(sequence_output) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Ubuntu 16.04 * Python version: 3.6.5 * PyTorch version: 1.3.1 * PyTorch Transformers version (or branch): 2.2.1 * Using GPU ? yes (1080) * Distributed of parallel setup ? (no) * Any other relevant information: ## Additional context My error message is here: When my model is on the training process, the error message occurs within some hours. I'm sure my problem is exactly same with this link: https://github.com/huggingface/transformers/issues/1760 however, update pytorch and transformers doesn't help. ``` File "training_finetuning.py", line 144, in train token_type_ids=b_token_type_ids, attention_mask=b_input_masks) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "../kaiser/src/modeling.py", line 53, in forward sequence_output, pooled_output = self.bert(input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 738, in forward encoder_attention_mask=encoder_extended_attention_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 384, in forward layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 355, in forward self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 309, in forward self_outputs = self.self(hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 230, in forward mixed_value_layer = self.value(hidden_states) File 
"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1372, in linear output = input.matmul(weight.t()) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc) ``` <!-- Add any other context about the problem here. -->
12-13-2019 18:08:19
12-13-2019 18:08:19
In [this](https://github.com/pytorch/pytorch/issues/24018) thread on PyTorch's GitHub, they said that this bug has been fixed. In more details, _"this bug was solved in cublas 10.2.0.186. The latest public version of cublas is 10.2.1.243 that was released with CUDA 10.1 Update 2."_<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,167
closed
using run_squad.py for predict and specifying config_name as path, config.json not found
## 🐛 Bug <!-- Important information --> The situation is that, when running a predict-only task and specifying 1) an explicit path for a fine-tuned albert model and 2) specifying a specific path to the corresponding config.json file, run_squad attempts to seek the config file in the location of the --output_dir. It appears that, when specifying config_name as a full path, the specification is ignored and run_squad looks in the specified output_dir location for config.json. Since I am automatically generating several output dirs during the course of running my pipeline, it is not convenient nor sensible for me to also copy the config file to each result directory. To be concrete, I am trying to specify that the config file should come from the training output model directory: --config_name /home/.../albert_models/squad2/config.json and that the eval/predict output should go to a result directory: --output_dir /home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0 When I do this I see the below. Model I am using (Bert, XLNet....): albert, fine-tuned for SQuAD2 using albert xxl Language I am using the model on (English, Chinese....): English The problem arise when using: * [X] the official example scripts: run_squad.py * [ ] my own modified scripts: (give details) The tasks I am working on is: * [2] an official GLUE/SQUaD task: SQuAD2 * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. in transformers, run_squad.py, with arguments below Also tried without --cache_dir, which also had the same effect. The only thing that worked was to use the model dir == the output dir, but that placed my outputs into the model dir, which is not acceptable (should be unaltered by predict-only tasks). <!-- If you have a code sample, error messages, stack traces, please provide it here as well. 
--> run_squad.py arguments: ``` --model_type albert --model_name_or_path /home/.../albert_models/squad2 --cache_dir /home/.../transformers/cache_dir/v2.0-albert --config_name /home/.../albert_models/squad2/config.json --do_eval --do_lower_case --predict_file /home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0/questionnaire.json --train_file None --per_gpu_train_batch_size 2 --per_gpu_eval_batch_size 24 --learning_rate 3e-05 --num_train_epochs 2.0 '--max_seq_length 128 --doc_stride 128 --version_2_with_negative --output_dir /home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0/ ``` Partial stack trace: ``` 12/12/2019 19:18:24 - INFO - __main__ - Evaluate the following checkpoints: ['/home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0''] Traceback (most recent call last): File "/home/.../transformers/transformers/configuration_utils.py", line 134, in from_pretrained resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File "/home/.../transformers/transformers/file_utils.py", line 182, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file /home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0/config.json' not found During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/.../transformers/transformers/configuration_utils.py", line 134, in from_pretrained resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File "/home/.../transformers/transformers/file_utils.py", line 182, in cached_path raise EnvironmentError("file {} not found".format(url_or_filename)) OSError: file /home/.../pipeline_results/session_id_1234/output/workdirs/0.0.0'/config.json not found ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> Expected that overriding the model_name_or_path with a direct path, and the config_name with another path (different from output_dir) would override the defaults. ## Environment * OS: ubuntu 18.04.1 * Python version: 3.7.5 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): 2.2.1, installed from source * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
12-13-2019 15:26:21
12-13-2019 15:26:21
@LysandreJik - Note changes above.<|||||>I believe the issue stems from the fact that the model cannot be evaluated unless it has been trained. A workaround is to specify the `model_name_or_path` to be the same as the `output_dir` so that it loads that when evaluating, but it isn't the best user experience. I'm thinking of forcing the evaluation to load the model from `model_name_or_path` rather than from `output_dir` when there is no `do_train` argument is specified. What do you think?<|||||>That's exactly the situation, and exactly the behavior that I believe would work. The training takes many hours (~20?) and I only want to do it once. Then I cache that in a central location and all of my many predict runs use that. <|||||>Let me know if the latest commit (c8ed1c8) fixes this issue.<|||||>Well... no and yes. The "no" is that I had trouble with dependencies when running that commit ( issues with missing "past" module). The "yes" is that I was able to slot in the fix to the 2.1.1 version and the fix worked. Not sure what my problems are with running both from master and from c8ed1c8. But at least there's a way forward. For unattended clone and operate, I'd either need the patch to be applied to 2.1.1, or I'll need some guidance about dependency issues with c8ed1c8. Thanks!<|||||>When you install from master (`pip install git+https://github.com/huggingface/transformers`) you run into issues? Do you mind copying the error message along with your software versions? We'll push a patch (2.2.2) later today.<|||||>Works great. Don't know what was going wrong with my prior attempts. Thanks!! <|||||>Glad to hear that. Feel free to re-open if you have similar issues.
transformers
2,166
closed
How to do further pretraining?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello ! How could I futher Pre-train the BERT ( including the unsupervised masked language model and next sentence prediction tasks ) **using my own corpus** ? thank you very much !
12-13-2019 15:08:17
12-13-2019 15:08:17
We have no scripts for pre-training, but we do have scripts for fine-tuning (which seems to be what you want to do). Take a look at [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for more information. We don't have examples that do NSP however, as it was proven with RoBERTa to not be particularly useful for training. You'll have to code it yourself or find an implementation somewhere if you want to train on that loss.<|||||>@LysandreJik I got it. thank you !<|||||>@JiangYanting If you want to do LM finetuning incl. NSP, you might wanna have a look at [FARM](https://github.com/deepset-ai/FARM). There's an example script [here](https://github.com/deepset-ai/FARM/blob/master/examples/lm_finetuning.py ). From our experience it depends a lot on the domain whether NSP makes sense. In some industry applications, we made good experience with also adding other auxiliary tasks in this phase of model training (e.g. an additional classification task for available tags of documents / sentences). <|||||>@tholor Wow, that's so cool ! I would have a try after I take a final exam^_^. thank you very much !<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,165
closed
Model parallelism + Adapters
Adding model parallelism for large T5 models and other models if needed. Adding adapters (a generalization of #1289) at the same time.
12-13-2019 15:01:19
12-13-2019 15:01:19
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,164
closed
[SMALL BREAKING CHANGE] Cleaning up configuration classes - Adding Model Cards
Clean up configuration. Previously, loading a JSON file into a configuration could be done either by `config = config_class(json_file)` or by `config = config_class.from_pretrained(json_file)`. This was a historical artifact from the time configuration classes didn't use the `from_pretrained()` method. It introduced complexity in the logic used to instantiate the classes, which impacted PRs like #1548 and complicated the code for adding new models. In this PR we remove the first path in favor of the standardized `config = config_class.from_pretrained(json_file)`. cc @LysandreJik @mfuntowicz @julien-c
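A short usage sketch of the remaining path described in this PR (the file path is illustrative):

```python
from transformers import BertConfig

# Old, historical path being removed by this PR:
# config = BertConfig("path/to/config.json")

# The single remaining, standardized way to load a configuration from a JSON file:
config = BertConfig.from_pretrained("path/to/config.json")
```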
12-13-2019 13:38:24
12-13-2019 13:38:24
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=h1) Report > Merging [#2164](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e92bcb7eb6c5b9b6ed313cc74abaab50b3dc674f?src=pr&el=desc) will **decrease** coverage by `0.99%`. > The diff coverage is `90.53%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2164/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2164 +/- ## ======================================== - Coverage 81.35% 80.35% -1% ======================================== Files 120 122 +2 Lines 18254 18335 +81 ======================================== - Hits 14851 14734 -117 - Misses 3403 3601 +198 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/modeling\_tf\_openai\_gpt\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX29wZW5haV9ncHRfdGVzdC5weQ==) | `94.73% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2JlcnRfdGVzdC5weQ==) | `96.22% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_roberta\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3JvYmVydGFfdGVzdC5weQ==) | `75.2% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_ctrl\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2N0cmxfdGVzdC5weQ==) | `93.57% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `75.64% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_t5\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3Q1X3Rlc3QucHk=) | `92.77% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `97.07% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_gpt2\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2dwdDJfdGVzdC5weQ==) | `94.16% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_ctrl\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2N0cmxfdGVzdC5weQ==) | `94.05% <ø> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_distilbert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.08% <ø> (ø)` | :arrow_up: | | ... and [50 more](https://codecov.io/gh/huggingface/transformers/pull/2164/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=footer). Last update [e92bcb7...1bbdbac](https://codecov.io/gh/huggingface/transformers/pull/2164?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>awesome<|||||>Ok merging
transformers
2,163
closed
PreTrainedEncoderDecoder on tensorflow
## 🚀 Feature Hi, would it be possible to create a tensorflow version of `PreTrainedEncoderDecoder`? ## Motivation The main motivation is that I would like to use `PreTrainedEncoderDecoder` in TensorFlow. Yeah, I got it, PyTorch is better and I totally agree but unfortunately, I have to use TensorFlow. ## Additional context Looking at the code it does not seem too hard to create a `TFPreTrainedEncoderDecoder` Thank you guys
12-13-2019 12:19:17
12-13-2019 12:19:17
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>We are still settling on the proper API for the pytorch version, so it will probably be awhile (months) before we make a tensorflow version. Feel free to take a stab at it, of course!<|||||>Thank you for the reply :) I guess I will use PyTorch then.
transformers
2,162
closed
pad_to_max_length param is not supported in PreTrainedTokenizer.encode
## ❓ Questions & Help Hello, I've installed the current version of transformers package (2.2.1) through pip on Python 3.6.8rc1 on Windows 10 Pro (build 17763.678 if it is important). I am trying to get a sentence encoded and padded at the same time: ```python tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') temp = tokenizer.encode(text, add_special_tokens=True, max_length=MAX_LENGTH, pad_to_max_length=True) ``` And I'm getting an error, that `pad_to_max_length` is unrecognized option. What am I missing?
12-13-2019 10:23:11
12-13-2019 10:23:11
Hello, can you try with the patch that was released today (2.2.2) and let me know if it works for you?<|||||>By updating the Transformers library from 2.2.1 to 2.2.2, **it works as expected without the bug** highlighted by @madrugado. My environment is the following: - **Python** 3.6.9 - **OS**: Ubuntu 16.04 - **Transformers**: 2.2.2 (installed from PyPi with `pip install transformers`) - **PyTorch**: 1.3.1. - **TensorFlow**: 2.0 The stack trace is the following: ``` Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') >>> text='Hello, my name is Edward' >>> temp = tokenizer.encode(text, add_special_tokens=True, max_length=50, pad_to_max_length=True) >>> temp [101, 7592, 1010, 2026, 2171, 2003, 3487, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] >>> ``` > Hello, can you try with the patch that was released today (2.2.2) and let me know if it works for you?<|||||>I also confirm that with 2.2.2 version everything is working fine. Thanks!<|||||>There is no clear documentation on `pad_to_max_length` param I had hard time finding this. It would be great if it is added to docs, or if it is present can you point me to that page. Thanks
transformers
2,161
closed
Adding model type to config.json
## Feature Add `model_type` to the *config.json* to define the model type and make it independent of the model name. ## Motivation Currently, the model type is discovered automatically from the name: if it is a BERT model, the autoloader picks the right classes only if the name contains `bert`; otherwise an error is thrown. This is cumbersome and error-prone, and it restricts how models can be named. Why not just add this information as an attribute in the config.json? Other suggestions are welcome! ## Info I would happily start working on a PR if others agree.
12-13-2019 10:18:03
12-13-2019 10:18:03
Is now solved by this [PR](https://github.com/huggingface/transformers/pull/2494) Thanks a lot!<|||||>Yes, thanks for the contribution @perdix!
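For readers landing here later, a quick way to see the attribute that the linked PR introduced; a short sketch against a release that includes it:

```python
from transformers import AutoConfig

# The configuration now records its model type explicitly, so the auto classes
# no longer have to infer it from the model name.
config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.model_type)  # -> "bert"
```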
transformers
2,160
closed
[WIP] Add UniLM model
# Typical workflow for including a model Here an overview of the general workflow: - [x] add model/configuration/tokenization classes - [x] add conversion scripts - [x] add tests - [x] finalize Let's detail what should be done at each step ## Adding model/configuration/tokenization classes Here is the workflow for adding model/configuration/tokenization classes: - [x] copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model name, - [x] edit the files to replace `XXX` (with various casing) with your model name - [x] copy-paste or create a simple configuration class for your model in the `configuration_...` file - [x] copy-paste or create the code for your model in the `modeling_...` files (PyTorch and TF 2.0) - [x] copy-paste or create a tokenizer class for your model in the `tokenization_...` file # Adding conversion scripts Here is the workflow for the conversion scripts: - [x] copy the conversion script (`convert_...`) from the present folder to the main folder. - [x] edit this script to convert your original checkpoint weights to the current pytorch ones. # Adding tests: Here is the workflow for the adding tests: - [x] copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main folder and rename them, replacing `xxx` with your model name, - [x] edit the tests files to replace `XXX` (with various casing) with your model name - [x] edit the tests code as needed # Final steps You can then finish the addition step by adding imports for your classes in the common files: - [x] add import for all the relevant classes in `__init__.py` - [x] add your configuration in `configuration_auto.py` - [x] add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py` - [x] add your tokenizer in `tokenization_auto.py` - [x] add your models and tokenizer to `pipeline.py` - [x] add a link to your conversion script in the main conversion utility (currently in `__main__` but will be moved to the `commands` subfolder in the near future) - [x] edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py` file - [x] add a mention of your model in the doc: `README.md` and the documentation itself at `docs/source/pretrained_models.rst`. - [x] upload the pretrained weigths, configurations and vocabulary files.
12-13-2019 08:54:20
12-13-2019 08:54:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=h1) Report > Merging [#2160](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f061606277322a013ec2d96509d3077e865ae875?src=pr&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `49.69%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2160/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2160 +/- ## ========================================== + Coverage 80.32% 80.37% +0.04% ========================================== Files 122 127 +5 Lines 18342 19000 +658 ========================================== + Hits 14734 15272 +538 - Misses 3608 3728 +120 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `88.91% <15.38%> (-2.55%)` | :arrow_down: | | [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `44.68% <33.33%> (-0.78%)` | :arrow_down: | | [transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `59.18% <33.33%> (-1.69%)` | :arrow_down: | | [transformers/modeling\_unilm.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3VuaWxtLnB5) | `36.95% <36.95%> (ø)` | | | [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `38.79% <53.84%> (+1.89%)` | :arrow_up: | | [transformers/tests/tokenization\_unilm\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91bmlsbV90ZXN0LnB5) | `55% <55%> (ø)` | | | [transformers/configuration\_unilm.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdW5pbG0ucHk=) | `87.09% <87.09%> (ø)` | | | [transformers/tests/modeling\_unilm\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3VuaWxtX3Rlc3QucHk=) | `93.75% <93.75%> (ø)` | | | [transformers/tokenization\_unilm.py](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91bmlsbS5weQ==) | `94.73% <94.73%> (ø)` | | | ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/2160/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=footer). Last update [f061606...bbacc86](https://codecov.io/gh/huggingface/transformers/pull/2160?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thank you for the PR! I edited your post to add the guideline for adding a new model; we'll check the boxes as we go. I'll have a look at the code and come back to you quickly!<|||||>> # Typical workflow for including a model > * [ ] add your models and tokenizer to `pipeline.py` @rlouf Sorry, I didn't find the `pipeline.py` file.<|||||>@sshleifer Thanks for the comments! We will merge them into the code. @addf400 <|||||>Let's restart with a new pull request @addf400 @donglixp <|||||>Is anyone still working on this? @addf400 @donglixp @JetRunner also @thomwolf from #1530<|||||>I'm also looking forward to applying the UniLM model via Huggingface Transformers! @donglixp @JetRunner @thomwolf <|||||>It seems that this pull request has lasted for a year but still not finished? Is someone still working on it? <|||||>Has this PR for UniLM model been added to Huggingface Transformers? @donglixp @JetRunner @thomwolf @sshleifer<|||||>Hey @ontocord , I think it the "minilm" model should work out-of-the-box: https://github.com/huggingface/transformers/issues/5777 Not sure if you're looking for this model :thinking: I haven't tried it yet, but the recent Microsoft papers (on language modeling) are looking really promising!<|||||>Thanks @stefan-it. I don't think MiniLM and UniLM are the same thing, altough it all falls under one project. The MS papers are promising!<|||||>I'm also looking forward to applying the unilm model via Huggingface Transformers!<|||||>2022 year, still not merged the unilm model into the master branch.<|||||>I'm still looking forward to applying the unilm model via Huggingface Transformers! 👻👻 <|||||>I'm still looking forward to applying the unilm model via Huggingface Transformers too!
transformers
2,159
closed
Low ROUGE scores for BertSum
Great work, very easy to pick up and play with. I downloaded the CNN/DM stories from the link provided and selected only the files that belong to the test set following See et al.'s dataset splits (https://github.com/abisee/cnn-dailymail/blob/master/url_lists/all_test.txt). Then I ran the model using the first command provided in the readme. My question is, what is the expected ROUGE F1 scores on the test set? I expected something near what was presented in the paper for BertSumExtAbs, which is: R-1: 0.4213 R-2: 0.1960 R-L: 0.3918 But the ROUGE scores I got were much lower: ****** ROUGE SCORES ****** ** ROUGE 1 F1 >> 0.303 Precision >> 0.328 Recall >> 0.288 ** ROUGE 2 F1 >> 0.185 Precision >> 0.210 Recall >> 0.172 ** ROUGE L F1 >> 0.335 Precision >> 0.356 Recall >> 0.320
12-13-2019 06:28:31
12-13-2019 06:28:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I'm struggling with this also :( did you make any progress diagnosing the problem?<|||||>No, I did not...I decided to go with a different model<|||||>hello.. guys.. any answers to this? Why is there such a low score? I looked at the summaries, and they seem to be good, but i have no comparison benchmark. However, the rouge scores are much lower than paper. how so? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>active it<|||||>@AI678 The BertSum is unfortunately not maintained anymore. If you're looking to do summarization, please check out the [seq2seq](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) scripts.
transformers
2,158
closed
GPT-2 implementation issue
Thanks for your great PyTorch implementations of these models! The GPT-2 paper mentions a few modifications relative to the original GPT, including: "A modified initialization which accounts for the accumulation on the residual path with model depth is used. We scale the weights of residual layers at initialization by a factor of 1/√N where N is the number of residual layers." I assume this helps training and is crucial for reimplementing GPT-2, so could you consider adding it to the repo?
12-13-2019 01:58:54
12-13-2019 01:58:54
Hi, this is for the initialization. We don't have any scripts that show how to pretrain GPT-2 (therefore no need for initialization), only scripts to fine-tune it from a checkpoint.<|||||>thanks for your reply
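For anyone who does pretrain GPT-2 from scratch, a minimal sketch of the initialization tweak quoted above, applied on top of the default init. Treating N as the number of residual branches (two per block, attention and MLP) is an assumption of this sketch, and `c_proj` is simply the name of those output projections in this implementation:

```python
import math
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config()            # n_layer=12 by default
model = GPT2LMHeadModel(config)  # randomly initialized, not pretrained

# Assumption: N counts both residual branches (attention + MLP) of every block.
n_residual = 2 * config.n_layer
scale = 1.0 / math.sqrt(n_residual)

for name, param in model.named_parameters():
    if name.endswith("c_proj.weight"):  # projections that feed the residual stream
        param.data.mul_(scale)
```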
transformers
2,157
closed
How to find the corresponding download models from Amazon?
## ❓ Questions & Help As we know, the TRANSFORMER could easy auto-download models by the pretrain( ) function. And the pre-trained BERT/RoBerta model are stored at the path of ./cach/.pytorch/.transformer/.... But, all the name of the download models are like this: d9fc1956a01fe24af529f239031a439661e7634e6e931eaad2393db3ae1eff03.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda.json It's not readable and hard to distinguish which model is I wanted. In another word, if I want to find the pretrained model of 'uncased_L-12_H-768_A-12', I can't finde which one is ? Thanks for your answering.
12-13-2019 01:34:56
12-13-2019 01:34:56
Hi, they are named as such because that's a clean way to make sure the model on the S3 is the same as the model in the cache. The name is created from the `etag` of the file hosted on the S3. If you want to save it with a given name, you can save it as such: ```py from transformers import BertModel model = BertModel.from_pretrained("bert-base-cased") model.save_pretrained("cased_L-12_H-768_A-12") ```<|||||>@LysandreJik, following up the question above, and your answer, I ran this command first: ``` from transformers import RobertaModel model = RobertaModel.from_pretrained("roberta-large") model.save_pretrained("./roberta-large-355M") ``` I guess, we expect config.json, vocab, and all the other necessary files to be saved in `roberta-large-355M` directory. Then I ran: ``` python ./examples/run_glue.py --model_type roberta --model_name_or_path ./roberta-large-355M --task_name MRPC --do_train --do_eval --do_lower_case --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_gpu_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 2.0 --output_dir ./results/mrpc/ ``` and I am getting: ``` OSError: Model name './roberta-large-355M' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './roberta-large-355M' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url ``` I checked the `roberta-large-355M` and there are only: `config.json` `pytorch_model.bin`, but files named ['vocab.json', 'merges.txt'] are missing. same issue with the XLNET: ``` ../workspace/transformers/xlnet_base# ls config.json pytorch_model.bin ``` What am I missing here? Why are all the files not downloaded properly? Thanks. <|||||>You also have to save the tokenizer into the same directory: ```python tokenizer.save_pretrained("./roberta-large-355M") ``` Let me know if this solves your issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>OSError: Model name 'roberta-base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'roberta-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. I got the error above even after saving the tokenizer, config, and model in the same directory<|||||>the problem for me is , when i load the model turning wifi off or switch off internet connection it fail to run but when i turn internet connection it run again. how can i run it off line. i also set enviornment variable like this . import os os.environ['HF_DATASETS_OFFLINE']='1' os.environ['TRANSFORMERS_OFFLINE']='1' generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') generator(text, do_sample=True, min_length=5) result "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. 
<|||||>import os from transformers import pipeline #HF_DATASETS_OFFLINE = 1 #TRANSFORMERS_OFFLINE = 1 #os.environ[HF_DATASETS_OFFLINE = 1,TRANSFORMERS_OFFLINE = 1] os.environ["HF_DATASETS_OFFLINE"] = "1" os.environ["TRANSFORMERS_OFFLINE"] = "1" cache_dir='/Users/hossain/Desktop/gpt2/gpt-neo-1.3/model/' generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') text = 'i am fine. what about you?' generator(text, do_sample=True, min_length=5) result: through an error "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.<|||||>i have dig down into the sentence_transformers lib to see which folder contain the file after downloaded. And came up with this script to see where sentence_transformers keep its files. ```python import os torch_home = os.path.expanduser( os.getenv("TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), 'torch'))) print(torch_home) ``` i hope it helps<|||||>> i have dig down into the sentence_transformers lib to see which folder contain the file after downloaded. And came up with this script to see where sentence_transformers keep its files. > > ```python > import os > > torch_home = os.path.expanduser( > os.getenv("TORCH_HOME", > os.path.join(os.getenv("XDG_CACHE_HOME", > "~/.cache"), 'torch'))) > > print(torch_home) > ``` > > i hope it helps thanks. the code works on windows too
transformers
2,156
closed
End-Task Distillation with DistilBERT
## ❓ Questions & Help The DistilBERT paper notes the IMDB and SQuAD results were obtained "with a second step of distillation during fine-tuning". What does this involve exactly and how can it be performed with the DistilBERT model in this repo?
12-12-2019 23:36:16
12-12-2019 23:36:16
Hello @shreydesai, You should have a look at [run_squad_w_distillation.py](https://github.com/huggingface/transformers/blob/master/examples/distillation/run_squad_w_distillation.py) which is the script used in the experiment you are mentioning. Victor<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
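For a rough idea of what that script does without reading it in full, a sketch of the kind of loss typically combined during the second distillation step; the temperature and weighting below are illustrative values, not the script's actual hyperparameters:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft part: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard part: the usual cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```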
transformers
2,155
closed
Special Tokens are Split by BPE
## 🐛 Bug When I load 'distilbert-base-uncased' DistilBertTokenizer (with do_basic_tokenize=False) and call tokenize() on a string that includes special tokens, the special tokens are broken up by BPE. Model I am using (Bert, XLNet....): DistilBertForSequenceClassification Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Load pretrained DistilBertTokenizer 2. call tokenize on a string including special tokens <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ``` tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', do_lower_case=True, do_basic_tokenize=False) print(tokenizer.special_tokens_map) ``` {'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'} ``` text = '%s Hrabri (Brave) was the lead boat of the Hrabri-class submarines; built by the Vickers-Armstrong Naval Yard in the United Kingdom, for the Kingdom of Serbs, Croats and Slovenes (later Yugoslavia) %s' % (tokenizer.cls_token, tokenizer.sep_token) print(text) ``` [CLS] Hrabri (Brave) was the lead boat of the Hrabri-class submarines; built by the Vickers-Armstrong Naval Yard in the United Kingdom, for the Kingdom of Serbs, Croats and Slovenes (later Yugoslavia) [SEP] ``` tokens = tokenizer.tokenize(text) print(' '.join(tokens)) ``` [ cl ##s ] hr ##ab ##ri ( brave ) was the lead boat of the hr ##ab ##ri - class submarines ; built by the vickers - armstrong naval yard in the united kingdom , for the kingdom of serbs , croats and slovene ##s ( later yugoslavia ) [ sep ] ## Expected behavior ``` tokens = tokenizer.tokenize(text) print(' '.join(tokens)) ``` [CLS] hr ##ab ##ri ( brave ) was the lead boat of the hr ##ab ##ri - class submarines ; built by the vickers - armstrong naval yard in the united kingdom , for the kingdom of serbs , croats and slovene ##s ( later yugoslavia ) [SEP] ## Environment * OS: Windows * Python version: 3.6.3 * PyTorch version: 0.4.1 * PyTorch Transformers version (or branch): 2.2.1 * Using GPU ? no * Distributed of parallel setup ? no * Any other relevant information: ## Additional context
12-12-2019 20:58:51
12-12-2019 20:58:51
Hello! Indeed this is a known issue with version 2.2.1. You can either revert to 2.2.0 or install from source (`pip install git+https://github.com/huggingface/transformers`) until we push a new version (2.2.2) which should happen before the end of the week.<|||||>I confirm that reverting to 2.2.0 solves the problem<|||||>Bumping to 2.2.2 (released today) should solve the problem too!<|||||>I can confirm that 2.2.2 fixes the issue. This question can be closed.<|||||>Thanks! This is a great library!<|||||>It really is! Can you close this question? That way the overview of open issues is a lot clearer. Thanks.<|||||>Hi, I still get the same issue with version `2.5.1` (installed from source). The `<MASK>` token seems to be split into it's individual characters when an input string is encoded. I trained the `roberta` model from scratch on my own dataset as described in https://huggingface.co/blog/how-to-train . I ran the following lines to test my trained model on the masked token prediction task. ``` config = RobertaConfig.from_json_file("drive/My Drive/doerrberto-small-v1/config.json") model = RobertaForMaskedLM(config) state_dict = torch.load("drive/My Drive/doerrberto-small-v1/pytorch_model.bin") model.load_state_dict(state_dict) tokenizer = RobertaTokenizer("drive/My Drive/doerrberto-small-v1/vocab.json", "drive/My Drive/doerrberto-small-v1/merges.txt") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) result = fill_mask(sentence) ``` This was when I encountered the `ValueError: only one element tensors can be converted to Python scalars` error. I then confirmed that this error was generated due to incorrect encoding of `<MASK>` token. Any help will be appreciated. Thanks!<|||||>@aksub99 `RobertaTokenizer`'s mask token is actually `<mask>` not `<MASK>` You can also just use `tokenizer.mask_token`<|||||>@julien-c Thanks for pointing that out, but I had used `tokenizer.mask_token` while testing. Sorry for the typo in my previous comment. That still gave me the same errors. This is my complete testing code snippet and it's output. Code: ``` import torch from transformers import RobertaConfig, RobertaForMaskedLM, pipeline, RobertaTokenizer config = RobertaConfig.from_json_file("drive/My Drive/doerrberto-small-v1/config.json") model = RobertaForMaskedLM(config) state_dict = torch.load("drive/My Drive/doerrberto-small-v1/pytorch_model.bin") model.load_state_dict(state_dict) tokenizer = RobertaTokenizer("drive/My Drive/doerrberto-small-v1/vocab.json", "drive/My Drive/doerrberto-small-v1/merges.txt") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer ) sentence = "I {} you".format(tokenizer.mask_token) print(sentence) token_ids = tokenizer.encode(sentence, return_tensors='pt') print(token_ids.squeeze()) print(tokenizer.mask_token_id) ``` Output: ``` I <mask> you tensor([ 0, 387, 225, 32, 81, 3229, 34, 377, 2]) 4 ``` Clearly, the `<mask>` is being split into it's individual characters.<|||||>Does the `doerrberto-small-v1` vocabulary file contain a mask token? Can you do `tokenizer.encode(tokenizer.mask_token)`, and does it return the `tokenizer.mask_token_id` in-between model-specific tokens?<|||||>@LysandreJik Yes, the `doerrberto-small-v1` vocabulary file does contain a mask token and is associated with an ID of 4. `tokenizer.encode(tokenizer.mask_token)` gives out `[0, 225, 32, 81, 3229, 34, 2]` which means that the mask token is again being split up. Sorry, could you explain what you mean by "in-between model-specific tokens"? <|||||>Seems broken in GPT2 ? 
```python tok = transformers.AutoTokenizer.from_pretrained("gpt2") tok.cls_token = "<|cls|>" sample = "My name is Barrack Obama<|cls|>I like pizza" print(tok.tokenize(sample)) ``` >`['My', 'Ġname', 'Ġis', 'ĠBarr', 'ack', 'ĠObama', '<', '|', 'cl', 's', '|', '>', 'I', 'Ġlike', 'Ġpizza']` <|||||>It does show up in the special tokens: ```python >>> print(tok) PreTrainedTokenizerFast(name_or_path='gpt2', vocab_size=50257, model_max_len=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'cls_token': '<|cls|>'}) ```<|||||>@LysandreJik maybe<|||||>Using `<|endoftext|>` does work however. It's just when you add new special tokens that the tokenizer doesn't use them. ```python >>> tok.tokenize("An attempt with eot<|endoftext|>Will it work") ['An', 'Ġattempt', 'Ġwith', 'Ġe', 'ot', '<|endoftext|>', 'Will', 'Ġit', 'Ġwork'] ```
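A sketch of the usual fix for the GPT-2 case above: registering the token via `add_special_tokens` puts it in the vocabulary, so the tokenizer keeps it as a single piece (and the model's embedding matrix then needs resizing):

```python
import transformers

tok = transformers.AutoTokenizer.from_pretrained("gpt2")
tok.add_special_tokens({"cls_token": "<|cls|>"})

sample = "My name is Barrack Obama<|cls|>I like pizza"
print(tok.tokenize(sample))  # '<|cls|>' now stays as one token

# If the tokenizer grew, resize the model's embeddings accordingly:
# model.resize_token_embeddings(len(tok))
```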
transformers
2,154
closed
AlBERT UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
## ❓ Questions & Help ![image](https://user-images.githubusercontent.com/36267779/70731538-8584a880-1d0f-11ea-8516-7895c3a50c27.png) ![image](https://user-images.githubusercontent.com/36267779/70731555-8d444d00-1d0f-11ea-84a5-0df7b00ff820.png) Hi! There is some problem while downloading any of the pre-trained AlBERT models, however, there weren't any problems a few days ago. Could you please tell me where can I download the AlBERT TensorFlow checkpoints (`albert_model.ckpt`) for running `convert_albert_original_tf_checkpoint_to_pytorch.py` script? Unfortunately, I wasn't able to find any for resolving this as in [2110](https://github.com/huggingface/transformers/issues/2110). I'd really appreciate any help in resolving the issue. Thanks a bunch in advance! tensorflow | 2.0.0 | torch | 1.3.1 | transformers | 2.2.1 | python | 3.7 |
12-12-2019 16:39:02
12-12-2019 16:39:02
Hello! Could you please tell me which model you are trying to download? I've just tried the following command and it succeeded without any issues: ```py AlbertForQuestionAnswering.from_pretrained("albert-base-v2", force_download=True) ``` I put the `force_download` flag to True to make sure I was downloading the files from the S3. Is there any way you could try this on your side?<|||||>In my environment (Python 3.6.9, OS Ubuntu, Transformers 2.2.1 (installed from _PyPi_), PyTorch 1.3.1 and TensorFlow 2.0), **I'm not able to reproduce your bug**, so I'm able to download and use any ALBERT model I want. I've tried the same code line that in your case generates the error, e.g. ``` > from transformers import AlbertForQuestionAnswering > model = AlbertForQuestionAnswering.from_pretrained(X) ``` where X is one of [_'albert-base-v1', 'albert-large-v1', 'albert-xlarge-v1', 'albert-xxlarge-v1', 'albert-base-v2', 'albert-large-v2', albert-xlarge-v2', 'albert-xxlarge-v2'_] You can specify `force_download=True` when you're loading a specific version of AlBERT model, e.g. ``` > from transformers import AlbertForQuestionAnswering > model = AlbertForQuestionAnswering.from_pretrained('albert-base-v1', force_download=True) ``` N.B: at the moment, there is a known bug when using v2 AlBERT models, as said when you import this version in Transformers: > There is currently an upstream reproducibility issue with ALBERT v2 models. Please see https://github.com/google-research/google-research/issues/119 for more information. > ## Questions & Help > ![image](https://user-images.githubusercontent.com/36267779/70721247-e905da80-1cfd-11ea-9863-90dcabf6bf58.png) > ![image](https://user-images.githubusercontent.com/36267779/70730858-65a0b500-1d0e-11ea-811c-d10ba7c1c356.png) > > Hi! There is some problem while downloading any of the pre-trained AlBERT models, however, there weren't any problems a few days ago. Could you please tell me where can I download the AlBERT TensorFlow checkpoints (albert_model.ckpt) for running convert_albert_original_tf_checkpoint_to_pytorch.py script? Unfortunately, I wasn't able to find any for resolving this as in #2110. > I'd really appreciate any help in resolving the issue. Thanks a bunch in advance!<|||||>I've tried with all of the models with and without `force_download=True` ![image](https://user-images.githubusercontent.com/36267779/70733283-aef30380-1d12-11ea-9207-0f5c8a16b497.png) Unfortunately, I have this bug now, however, it was OK yesterday. Besides, have the same issue using Jupyter notebook after restarting the kernel. Before that worked as expected. Thanks for your concern and fast reply!<|||||>Do you get the same errors with other models, like BERT?<|||||>I've just tried again using completely new basic script and it worked, don't know what is that. I just thought it's the same as in another issue. But anyway, thanks a lot, guys!<|||||>Glad you could fix it.
transformers
2,153
closed
BertAbs decoder_input_ids
## ❓ Questions & Help What should the `decoder_input_ids` look like if we are fine-tuning the model on our own dataset? I tried `[unused0] [unused2] summary_sent_toks [unused2] summary_sent_toks2 [unused1]` (looking at the paper) ... but I get shape errors because of line 150 in `modeling_bertabs.py`: ``` decoder_input_ids[:, :-1], encoder_hidden_states, dec_state ``` The `decoder_input_ids` shape I'm passing in in `(8, 512)` ... but the code above chops off the last column.
12-12-2019 06:33:39
12-12-2019 06:33:39
Could you please post the full stack trace as well as the part of the code you use for fine-tuning?<|||||>See here: https://gist.github.com/ohmeow/f2cc6ea0a9d0e4a5fa227942edcfa723 I think it has something to do with how I'm preparing the target tokens but I'm not sure what the appropriate fix is. Looked at the BertSum source code on github but it was confusing. Either way, the shape of the decoder_ids is (batch size, max_seq_len) ... but the model chops off the last column before passing the ids off to the decoder. My gut feeling is that this is to account for the need to shift the ids right by 1 for the gold labels but not sure ... and that means the input should be (batch_size, max_seq_len+1). Any thoughts on what I should do or what I'm missing? Thanks On Thu, Dec 12, 2019 at 3:15 AM Rémi Louf <[email protected]> wrote: > Could you please post the full stack trace as well as the part of the code > you use for fine-tuning? > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2153?email_source=notifications&email_token=AAADNMBPVGGALZOJ3SCV543QYIMN3A5CNFSM4JZZWIK2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEGWKWLY#issuecomment-564964143>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AAADNMCRVCPLMOPCLI2L4S3QYIMN3ANCNFSM4JZZWIKQ> > . > <|||||>I think I may have solved the `decoder_input_ids` issue with the fix to my code: ``` def fit_to_block_size(sequence, block_size, pad_token_id, sep_token_id, is_summary:bool=False): """ Adapt the source and target sequences' lengths to the block size. If the sequence is shorter than the block size we pad it with -1 ids which correspond to padding tokens. """ if len(sequence) > block_size: if (is_summary): sequence = sequence[:block_size]+ [symbols['EOS']] else: # ensure inclusion of whole sentences if possible sent_sep_idxs = [ idx for idx, t in enumerate(sequence) if t == sep_token_id and idx < block_size ] last_sent_sep_idx = min(max(sent_sep_idxs)+1 if (len(sent_sep_idxs) > 0) else block_size, block_size) sequence = sequence[:last_sent_sep_idx] if len(sequence) < block_size: sequence.extend([pad_token_id] * (block_size - len(sequence))) if (is_summary): sequence += [pad_token_id] return sequence ``` However, I'm now running into an error when the "context" attention is calculated in the `TransformerDecoderLayer` ... ``` ~/development/_training/ml/nlp-playground/tritonlyticsai/text/modeling_bertabs.py in forward(self, key, value, query, mask, layer_cache, type, predefined_graph_1) 601 602 if mask is not None: --> 603 mask = mask.unsqueeze(1).expand_as(scores) 604 scores = scores.masked_fill(mask, -1e18) 605 RuntimeError: The expanded size of the tensor (1) must match the existing size (512) at non-singleton dimension 3. Target sizes: [512, 8, 8, 1]. Tensor sizes: [8, 1, 512, 512] ``` The passed in mask is built by the model code based on the dimensions of the source and target input ids ... which look right to me.<|||||>@ohmeow Have you been able to fine-tune the BertAbs on your dataset? I would appreciate if your can share you experience.<|||||>This is still a work in progress ... but the below should help you get started on fine-tuning the pretrained model. Look here: https://gist.github.com/ohmeow/7aa294e2959c1315fe7dfdf8091f2d87 You'll notice that I also copied a few of the HF .py files into my own package (ohmeow.text). 
I did this two be able to step through the code, troubleshoot, and also because a modification has to be made to modeling_bertabs.py. #pdb.set_trace() encoder_hidden_states = encoder_output #encoder_output[0] --WTG-- dec_state = self.decoder.init_decoder_state( encoder_input_ids, encoder_hidden_states ) decoder_outputs, _ = self.decoder( decoder_input_ids[:, :-1], encoder_hidden_states, dec_state ) #return decoder_outputs #--WTG-- return self.generator(decoder_outputs) the commented out sections are what was originally there in the HF code. On Mon, Dec 30, 2019 at 6:17 PM Ehsan Hemmati <[email protected]> wrote: > @ohmeow <https://github.com/ohmeow> Have you been able to fine-tune the > BertAbs on your dataset? I would appreciate if your can share you > experience. > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2153?email_source=notifications&email_token=AAADNMD6OHZO67VQURDKBJLQ3KTSRA5CNFSM4JZZWIK2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEH3TI6Q#issuecomment-569848954>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AAADNMGJ6MSCUTEYGHNB4BTQ3KTSRANCNFSM4JZZWIKQ> > . > <|||||>@ohmeow Thanks for sharing this. Just what is the HF files you mentioned?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,152
closed
RoBERTa/GPT-2 tokenization: Why do we call all_special_tokens for each token in split_on_tokens?
Is there a reason why the property `all_special_tokens` is called on every iteration in `split_on_tokens()` when looping over all tokens? When I initialize a new variable and call `all_special_tokens` only once in the tokenizer init, tokenization is sped up by roughly 2-3x for me. Maybe I am missing something :)
12-12-2019 02:00:58
12-12-2019 02:00:58
Hi, could you provide an example that was sped up by replacing that variable? When tokenizing 55k tokens 10 times without creating a variable for all_special_tokens I get the result in 3.88s whereas when creating a variable I get the result in 3.87s. This doesn't seem like such a big difference!<|||||>In my case, I tested it on the feature_conversion for SQuAD. I measure the time to convert a subset of the SQuAD dataset to features: ``` start_time = time.time() tok_times = [] for i in range(10): start_time = time.time() convert_examples_to_features( examples=eval_examples, tokenizer=tokenizer, max_seq_length=FLAGS.max_seq_length, doc_stride=FLAGS.doc_stride, max_query_length=FLAGS.max_query_length, is_training=False, output_fn=append_feature) delta_time = time.time() - start_time print('Run {}: Time for tokenization: {}'.format(i, delta_time)) tok_times.append(delta_time) print('Avg time for tokenization: {}'.format(np.mean(tok_times))) ``` The original implementation yields (in seconds): ``` Run 0: Time for tokenization: 1.8680036067962646 Run 1: Time for tokenization: 1.8013951778411865 Run 2: Time for tokenization: 1.7933814525604248 Run 3: Time for tokenization: 1.7968308925628662 Run 4: Time for tokenization: 1.8006742000579834 Run 5: Time for tokenization: 1.7927491664886475 Run 6: Time for tokenization: 1.8060340881347656 Run 7: Time for tokenization: 1.7863578796386719 Run 8: Time for tokenization: 1.807504415512085 Run 9: Time for tokenization: 1.7879209518432617 Avg time for tokenization: 1.8040851831436158 ``` When initializing a variable instead and referencing it in tokenization: ``` Run 0: Time for tokenization: 0.7765586376190186 Run 1: Time for tokenization: 0.6800308227539062 Run 2: Time for tokenization: 0.6858618259429932 Run 3: Time for tokenization: 0.6877231597900391 Run 4: Time for tokenization: 0.6820297241210938 Run 5: Time for tokenization: 0.6838114261627197 Run 6: Time for tokenization: 0.6909258365631104 Run 7: Time for tokenization: 0.6799609661102295 Run 8: Time for tokenization: 0.6868128776550293 Run 9: Time for tokenization: 0.679542064666748 Avg time for tokenization: 0.6933257341384887 ``` I basically just initialize a new variable in init of `tokenization_utils.py`: `self.all_special_tokens_init = self.all_special_tokens` And then I reference this variable in `split_on_tokens()` instead of the call to the property function `all_special_tokens`.<|||||>Indeed, I do get a massive speedup when initializing a variable and using `squad_convert_examples_to_features`. Thank you for letting us know! I'll update this later today.<|||||>Should have been fixed with f24a228<|||||>No problem :) Thanks for the fix. I will close this then.
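A toy illustration of the hoisting being discussed (this is not the library's internal code; it just shows the effect of moving a property lookup out of a hot loop):

```python
import time
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
words = ("some reasonably long paragraph " * 200).split()

start = time.time()
for _ in range(200):
    # the property is re-evaluated for every word in the comprehension
    _ = [w for w in words if w in tokenizer.all_special_tokens]
print("property in loop:", time.time() - start)

special = tokenizer.all_special_tokens  # computed once, as in the proposed fix
start = time.time()
for _ in range(200):
    _ = [w for w in words if w in special]
print("hoisted:", time.time() - start)
```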
transformers
2,151
closed
RoBERTa tokenization: Why do we call 'all_special_tokens' in each tokenize loop?
Is there a reason why the property `all_special_tokens` is called on every iteration in `tokenize()` when looping over all tokens? When I initialize a new variable and call `all_special_tokens` only once in the tokenizer init, tokenization is sped up by roughly 2-3x for me.
12-12-2019 01:48:37
12-12-2019 01:48:37
transformers
2,150
closed
RoBERTa tokenization: Why do we call 'all_special_tokens' in each tokenize loop?
Is there a reason why the property function 'all_special_tokens' is called in each iteration in tokenize() when looping over all tokens?
12-12-2019 01:48:37
12-12-2019 01:48:37
transformers
2,149
closed
:bug: #2120 in model.from_pretrained, PosixPath crashes at "albert" check
- `pretrained_model_name_or_path` is now stringified to allow the "albert" and "v2" checks with PosixPath (or any other path representation that isn't iterable). - If `pretrained_model_name_or_path` is None, it stringifies to "None", which doesn't contain "albert", so it's OK. - Calling `str(pretrained_model_name_or_path)` twice doesn't impact performance, as it happens only once per program and the `and` operator will generally short-circuit on its left side. - Added a regression test.
12-11-2019 23:47:22
12-11-2019 23:47:22
Ok, I've checked the errors in CI. Those are linked to the fact that in older python version, PosixPath is not converted automatically to String and `os.path.isdir/isfile` crash because it expects a string or int. So my patch works perfectly in latest version of python (like 3.7) but not older (like 3.5) which is quite ugly. Solutions are: - anywhere there is `isdir/isfile`, force `str(path)` in call which is ugly but will work except if you want to manage other types than strings in path - officially ask people to convert their path to strings when calling `from_pretrained` which is shame because it shows the impossibility to be backward compatible completely but it won't introduce `str()` everywhere in the code. WDYT? <|||||>Hey @mandubian, thanks for offering that fix. I think that in the next release we'll remove this warning about albert models v2, which will solve the problem with PosixPaths.<|||||>@LysandreJik perfect! It will still fail with python 3.5 on `isdir/isfile` but can we do anything for that? I'm not sure... that's the history of Python ;)<|||||>The line was removed for version 2.3.0 so there's no need for that anymore. Thanks @mandubian :).
transformers
2,148
closed
Fix encode plus
Fixing the tensor creation in encode_plus
12-11-2019 20:18:47
12-11-2019 20:18:47
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=h1) Report > Merging [#2148](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/030faccb8d45be9bdd2b4b80ff26f36dc41f622a?src=pr&el=desc) will **decrease** coverage by `0.02%`. > The diff coverage is `30.76%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2148/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2148 +/- ## ========================================== - Coverage 80.07% 80.05% -0.03% ========================================== Files 112 112 Lines 16866 16868 +2 ========================================== - Hits 13505 13503 -2 - Misses 3361 3365 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2148/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `90.22% <30.76%> (-0.8%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=footer). Last update [030facc...3d57c51](https://codecov.io/gh/huggingface/transformers/pull/2148?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great! Nice catch @LysandreJik
transformers
2,147
closed
Recommended way for creating distillBERT container and serving
## ❓ Questions & Help <!-- A clear and concise description of the question. --> As per documentation, I am supposed to load distilbert as below. question_answering_model = torch.hub.load('huggingface/pytorch-transformers', 'modelForQuestionAnswering', 'distilbert-base-uncased-distilled-squad') question_answering_tokenizer = torch.hub.load('huggingface/pytorch-transformers', 'tokenizer', 'distilbert-base-uncased-distilled-squad') I'm using google Cloud Run. It bring up the container (and hence the model) only upon request. This causes download and load delay. How to pre-download the model and serve? I am looking for dockerfile step where it could install the weights file and other files needed for the model. This way, I am hoping that my dynamic delays are reduced and I get inference must faster. Please let me know if such a thing is possible. thanks Ishwar
12-11-2019 18:07:39
12-11-2019 18:07:39
You can find the s3 URL of models here for distilbert: https://github.com/huggingface/transformers/blob/master/transformers/configuration_distilbert.py If you build the docker on your machine, first download model files on your machine. Then just add those files to your container through Dockerfile. If you want your Docker build to download from s3, you can install `aws-cli` in Dockerfile and run `aws s3 cli`. But it will make it slower. Naturally a model in your docker will make it a bit fatter.<|||||>I think the path has changed slightly. I found the file in "src" folder under master. https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_distilbert.py <|||||>About downloading model files, in configuration_distilbert.py, I only found https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-distilled-squad-config.json file path. It just gives config. It is not a weights file / pickle file. Please suggest the path of files which I can download and make part of the local folder. Thanks.<|||||>Links to pre-trained models are available in the beginning of each `modeling_xxx.py` file, e.g. for [BERT](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L34-L56). Put this with a configuration in your folder and you can load them locally. You could also use the `save_pretrained` method to automatically create a folder that can be used with `from_pretrained`.
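One concrete way to bake the weights into the image is a small script executed at build time (e.g. from a `RUN python download_model.py` step in the Dockerfile); the file name and target path below are just placeholders:

```python
# download_model.py -- run during the Docker build so weights ship with the image
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased-distilled-squad"
SAVE_DIR = "/app/model"  # placeholder path inside the container

AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME).save_pretrained(SAVE_DIR)
AutoTokenizer.from_pretrained(MODEL_NAME).save_pretrained(SAVE_DIR)
# At serving time, load with from_pretrained(SAVE_DIR): no download, no cold-start delay.
```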
transformers
2,146
closed
doc: fix pretrained models table
Hi, this PR fixes the pretrained models table, see #2145.
12-11-2019 16:57:49
12-11-2019 16:57:49
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=h1) Report > Merging [#2146](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e2f9fed554bb5f147ea3d9573004b447dd7c9e7?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2146/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2146 +/- ## ======================================= Coverage 80.07% 80.07% ======================================= Files 112 112 Lines 16866 16866 ======================================= Hits 13505 13505 Misses 3361 3361 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=footer). Last update [2e2f9fe...c852efa](https://codecov.io/gh/huggingface/transformers/pull/2146?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great, thanks @stefan-it
transformers
2,145
closed
the docs pretrained models is missing
Hi, your docs have the table with pretrained models missing, probably some formatting error, as the source code has the table <img width="1280" alt="Screenshot 2019-12-11 at 17 31 01" src="https://user-images.githubusercontent.com/340180/70640463-4174a380-1c3c-11ea-9c6e-ca343ef46332.png"> https://huggingface.co/transformers/pretrained_models.html Cheers, Piotr
12-11-2019 16:33:17
12-11-2019 16:33:17
Should be working now :)<|||||>Thanks @PiotrCzapla for raising the issue, @stefan-it fixed it earlier today!
transformers
2,144
closed
Allowing from_pretrained to load from url directly
Allowing `from_pretrained` to load from url directly.
12-11-2019 16:21:44
12-11-2019 16:21:44
cc @mfuntowicz <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=h1) Report > Merging [#2144](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d103546ef102d69ea12cdca3ec3163052886851?src=pr&el=desc) will **increase** coverage by `0.51%`. > The diff coverage is `91.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2144/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2144 +/- ## ========================================== + Coverage 79.85% 80.36% +0.51% ========================================== Files 114 114 Lines 17059 17091 +32 ========================================== + Hits 13622 13736 +114 + Misses 3437 3355 -82 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/modeling\_tf\_auto\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2F1dG9fdGVzdC5weQ==) | `40.67% <100%> (+4.31%)` | :arrow_up: | | [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `89.87% <100%> (+1.56%)` | :arrow_up: | | [transformers/tests/utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3V0aWxzLnB5) | `93.75% <100%> (+0.2%)` | :arrow_up: | | [transformers/tests/tokenization\_auto\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9hdXRvX3Rlc3QucHk=) | `58.62% <100%> (+8.62%)` | :arrow_up: | | [transformers/tests/modeling\_auto\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2F1dG9fdGVzdC5weQ==) | `38.09% <100%> (+4.19%)` | :arrow_up: | | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `90.47% <100%> (+0.24%)` | :arrow_up: | | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.01% <80%> (+31.01%)` | :arrow_up: | | [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.35% <83.33%> (+0.78%)` | :arrow_up: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.46% <83.33%> (+0.6%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/2144/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=footer). 
Last update [2d10354...413f419](https://codecov.io/gh/huggingface/transformers/pull/2144?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>You can test this with identifier `"dbmdz/bert-base-german-cased"` (cc @stefan-it, copied your weights and also converted them to TF 2.0) Or for a smaller, dummy model, with `"julien-c/bert-xsmall-dummy"`.<|||||>Great and clean, @julien-c Merging
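For completeness, the identifiers mentioned above load with the usual call; a short sketch (per this PR, the same method also accepts a direct URL or a local folder):

```python
from transformers import AutoModel, AutoTokenizer

# Community-uploaded weights referenced in this thread.
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
```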
transformers
2,143
closed
Fix typo in examples/run_glue.py args declaration.
deay -> decay
12-11-2019 15:15:02
12-11-2019 15:15:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=h1) Report > Merging [#2143](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4c12860f7ae61659aed2675498350a386fc4e122?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2143/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2143 +/- ## ======================================= Coverage 80.07% 80.07% ======================================= Files 112 112 Lines 16867 16867 ======================================= Hits 13506 13506 Misses 3361 3361 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=footer). Last update [4c12860...059111d](https://codecov.io/gh/huggingface/transformers/pull/2143?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Great, thanks!
transformers
2,142
closed
master branch examples/run_squad.py: missing --predict_file argparse argument
## 🐛 Bug <!-- Important information --> Model I am using: albert Language I am using the model on (English, Chinese....): English The problem arise when using: * [X] the official example scripts: (give details) examples/run_squad/py: --predict_file not recognized * [ ] my own modified scripts: (give details) The tasks I am working on is: * [X] an official GLUE/SQUaD task: run_squad * [ ] my own task or dataset: (give details) ## To Reproduce Try running an eval only run with run_squad.py Steps to reproduce the behavior: 1. provide all of the requisite arguments to run_squad.py 2. observe error `run_squad.py: error: unrecognized arguments: --predict_file` 3. <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> A correct run ## Environment * OS: * Python version: 3.7.5 * PyTorch version: n/a * PyTorch Transformers version (or branch): master * Using GPU ? yes * Distributed of parallel setup ?no * Any other relevant information: Quick inspection of examples/run_squad.py reveals missing declaration, present in, e.g., 1.2.0 ## Additional context <!-- Add any other context about the problem here. -->
12-11-2019 14:57:24
12-11-2019 14:57:24
In order to use the **evaluation** mode, you have to pass from script the `do_eval` parameter (in addition to the "classical" input parameters for evaluation). > ## Bug > Model I am using: albert > > Language I am using the model on (English, Chinese....): English > > The problem arise when using: > > * [x] the official example scripts: (give details) > examples/run_squad/py: --predict_file not recognized > * [ ] my own modified scripts: (give details) > > The tasks I am working on is: > > * [x] an official GLUE/SQUaD task: run_squad > * [ ] my own task or dataset: (give details) > > ## To Reproduce > Try running an eval only run with run_squad.py > Steps to reproduce the behavior: > > 1. provide all of the requisite arguments to run_squad.py > 2. observe error `run_squad.py: error: unrecognized arguments: --predict_file` > > ## Expected behavior > A correct run > > ## Environment > * OS: > * Python version: 3.7.5 > * PyTorch version: n/a > * PyTorch Transformers version (or branch): master > * Using GPU ? yes > * Distributed of parallel setup ?no > * Any other relevant information: > > Quick inspection of examples/run_squad.py reveals missing declaration, present in, e.g., 1.2.0 > > ## Additional context<|||||>Indeed, there was a big refactor of the SQuAD script recently which removed these arguments in favor of `data_dir`, which contains the files. I'll add the possibility to either use `predict_file` and `train_file` instead of `data_dir` later today.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,141
closed
Fine-tuning distilled GPT-2
## ❓ Questions & Help To my understanding, examples/run_lm_finetuning.py can be used to fine-tune the model to new data. How do I fine-tune a distilled GPT-2? To be precise, I assume that I can use the entire code, but I just need to import the right module. I tried importing DistilGPT2Config, DistilGPT2LMHeadModel, DistilGPT2Tokenizer, but it doesn't work out.
12-11-2019 14:48:51
12-11-2019 14:48:51
_DistilGPT2Config_, _DistilGPT2LMHeadModel_ and _DistilGPT2Tokenizer_ **don't exist**. In order to fine-tuning the DistilGPT2 model for LM, you can use the following settings of tokenizer, config and model: **Tokenizer**: ``` > from transformers import GPT2Tokenizer > tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2', ) ``` N.B: as said in the source code [here](https://github.com/huggingface/transformers/blob/35401fe50fa3e460b2a4422630b017f106c79e03/transformers/tokenization_gpt2.py), this tokenizer requires a space to start the input string, therefore the `encoding` and `tokenize` methods should be called with the `add_prefix_space` flag set to `True`. Otherwise, this tokenizer's `encode`, `decode`, and `tokenize` methods will not conserve the spaces at the beginning of a string: `tokenizer.decode(tokenizer.encode(" Hello")) = "Hello"` **Config**: ``` > from transformers import GPT2Config > config = GPT2Config.from_pretrained('distilgpt2') ``` **Model**: ``` > from transformers import GPT2LMHeadModel > model = GPT2LMHeadModel.from_pretrained('distilgpt2') ``` N.B: for completeness, in order to use DistilGPT2 model, you have to use the following code: `model = GPT2Model.from_pretrained('distilgpt2')`. > ## Questions & Help > To my understanding, examples/run_lm_finetuning.py can be used to fine-tune the model to new data. How do I fine-tune a distilled GPT-2? To be precise, I assume that I can use the entire code, but I just need to import the right module. I tried importing DistilGPT2Config, DistilGPT2LMHeadModel, DistilGPT2Tokenizer, but it doesn't work out.<|||||>It works. Thank you.
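Putting those pieces together, a minimal sketch of the single loss computation that run_lm_finetuning.py repeats over your dataset (the input sentence is a placeholder):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

inputs = tokenizer.encode("Sample text from my fine-tuning corpus", return_tensors="pt")
outputs = model(inputs, labels=inputs)  # language-modeling loss comes first
loss = outputs[0]
loss.backward()  # the fine-tuning script wraps this in its optimizer loop
```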
transformers
2,140
closed
return_tokens_mapped_to_origin not working
## 🐛 Bug Model I am using: **Bert** Language I am using the model on: **English** ## To Reproduce Call `bertTokenizer.tokenize("text", return_tokens_mapped_to_origin=True)` Result: > TypeError: _tokenize() got an unexpected keyword argument 'return_tokens_mapped_to_origin' ## Expected behavior The official documentation mentions a "return_tokens_mapped_to_origin" optional parameter that when set to True should return the index of each token in the initial given text. https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=return_tokens_mapped_to_origin#transformers.PreTrainedTokenizer.tokenize ## Environment * OS: macOS Mojave * Python version: 3.7 * PyTorch version: 1.3.0 * PyTorch Transformers version (or branch): 2.2.1 * Using GPU ? No ## Additional context In the source code this parameter is never used outside of the doc comment, neither in the base class nor in its implementations.
12-11-2019 13:19:11
12-11-2019 13:19:11
What is the idea here? That for each (sub)token its "parent" token ID is remembered? That would be so great. I can definitely use functionality like that.<|||||>> What is the idea here? That for each (sub)token its "parent" token ID is remembered? That would be so great. I can definitely use functionality like that. This is what the doc says: > return_tokens_mapped_to_origin: (optional) Set to True to return the index of each token in the initial whitespace tokenization. (default False) I think the idea was that with this parameter set to True, in addition to the tokens, the function returns a map to the position of the i-th token in the original sentence, so the word it belongs to. So for example considering the sentence: `Word-embedding is so nice` If the tokenization is `["word", "-", "em", "##bed", "##ding", "is", "so", "nice"]` I should have as second returned value something like `[0, 0, 0, 0, 0, 1, 2, 3]` which corresponds to the position of the tokens "parent" in the whitespace tokenization `["word-embedding", "is", "so", "nice"]` It would be very useful but as I can see it hasn't been implemented, don't know why it is mentioned in the documentation. <|||||>An easy way to implement it without the need to adapt the code to every single tokenizer could be to whitespace-tokenize the text first, then for each whitespace-token call the subword-tokenizer and add to the 'map' the current position for the number of subword-tokens returned. This could be used in the library to implement this feature and can work also as a workaround to achieve the same result.<|||||>Hi, thanks for pointing that out @alessiocancian, this documentation was an error. You're right about the expected behavior, this is what happens in the `squad_convert_examples_to_features`. It is not implemented yet in the `tokenize` method as we don't have the bandwidth for it currently, but it will probably be in a future release as it's very useful to map tokens back to the original normalized sentence.<|||||>This sounds like a great addition indeed! +1<|||||>For everyone interested here's the code of the workaround I mentioned: ``` sentence = "Word-embedding is so nice" words = sentence.split() #whitespace tokenization tokens = [] tokens_map = [] for i, word in enumerate(words): _tokens = tokenizer.tokenize(word) for token in _tokens: tokens.append(token) tokens_map.append(i) print(words[tokens_map[2]]) #prints "Word-embedding" ``` Needs some changes to work with separators, but could be a starting point for an easy implementation in the `tokenize` method @LysandreJik EDIT: found out that `sentence.split()` is not the best to reconstruct words because of punctuation, you can change it with a generic word tokenizer like `nltk.word_tokenize`.<|||||>@alessiocancian Unfortunately you will inevitably run into inconsistencies between the tokenizer that you used and the base tokenizer that is used in transformers internally. I am not sure whether there are even distinct steps in the tokenisation process (string->tokens->subword units), so I am curious to see what @LysandreJik has planned and how they are going to implement it! When I look at the source code of the squad example, it seems that punctuation is not taken care of and that splits happen on white space characters (as defined in `_is_whitespace`) only. https://github.com/huggingface/transformers/blob/7296f1010b6faaf3b1fb409bc5a9ebadcea51973/transformers/data/processors/squad.py#L490-L507 I might be missing something, though. 
<|||||>> @alessiocancian Unfortunately you will inevitably run into inconsistencies between the tokenizer that you used and the base tokenizer that is used in transformers internally. @BramVanroy yes I thought the same thing, with whitespace tokenization you can reconstruct it easily but using a tokenizer you can't, you need to use the same one. A way could be to have the tokenizer as parameter following a common interface (a tokenize method which takes a string and returns a list of strings) but I'm not sure if it makes sense. Whitespace tokenization in most cases is useless because you get unexpected extra punctuation. The easiest way is still to use the code I shared so you have full control on the tokenization you're referencing to. I'm using it and works fine.<|||||>Hey @alessiocancian. I did some testing and I ran into an issue: your idea won't work for all tokenizer since it seems that they are context-sensitive. Here is an example with the roberta tokenizer: ```python tokenizer = RobertaTokenizer.from_pretrained('roberta-base') print(tokenizer.tokenize('They were hugging.')) # ['They', '_were', '_hugging', '.'] print(tokenizer.tokenize('hugging')) # ['h', 'ug', 'ging'] ``` I am not sure whether it is expected for tokenizers to work like this. It seems odd: if "hugging" is in the vicabulary, why isn't the tokenizer using it in the second case? I also tried starting the string with a space or a special token, but to no avail. Perhaps @LysandreJik can shed some light here. I tested with a couple of tokenizers, and to get the same tokenization for the whole sequence at once and word-for-word, it seems that you can add "i" (or any token with only one sub token) to the token and then remove that subtoken again. However, for the first token, the "i" must be at the end. I tested this with 10k sentences on albert, bert, distilbert, gpt2, openai, roberta, and xlnet tokenizers. XLNet behaves a bit weird because it tokenizes the i like `'▁', 'i'` so the tokens need to be removed twice. It's messy, I know, but it works... ```python tokens = [] for idx, t in enumerate(sentence.split()): if idx > 0: t = f"i {t}" subtokens = tok.tokenize(t) subtokens.pop(0) # need to pop twice for xlnet to remove # '▁', 'i' if tok_name == 'xlnet': subtokens.pop(0) else: t = f"{t} i" subtokens = tok.tokenize(t) subtokens.pop(-1) if tok_name == 'xlnet': subtokens.pop(-1) tokens += subtokens ```<|||||>Hi @BramVanroy, concerning your question of why the word "hugging" was split even though it clearly was in the dictionary: the RoBERTa tokenizer uses a byte-level BPE tokenizer like GPT-2. It makes the difference between words preceded by a space, and those that are not, as you correctly guessed. You can't simply add a space at the beginning as it will get stripped in the tokenize method. In order to do so, you would have to specify the `add_prefix_space` boolean option: ```py from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-base') print(tokenizer.tokenize('They were hugging.')) # ['They', 'Ġwere', 'Ġhugging', '.'] print(tokenizer.tokenize('hugging', add_prefix_space=True)) # ['Ġhugging'] ```<|||||>Hey @LysandreJik thanks for your time. But isn't that exactly what the tokenizer does? What am I missing here? https://github.com/huggingface/transformers/blob/81d6841b4be25a164235975e5ebdcf99d7a26633/src/transformers/tokenization_gpt2.py#L194-L201 Also, it is a bit strange to see that not all tokenizers know this attribute. 
Wouldn't it make more sense to have this as part of the PretrainedTokenizer's `_tokenize` or at least adding `**kwargs` to all tokenizer's `_tokenize`? It feels awkward now when quickly wanting to swapping tokenizers by only changing the init, but then you get: ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') print(tokenizer.tokenize('They were hugging.')) # ['They', 'Ġwere', 'Ġhugging', '.'] print(tokenizer.tokenize('hugging', add_prefix_space=True)) # TypeError: _tokenize() got an unexpected keyword argument 'add_prefix_space' ``` I understand _why_ the other tokenizers don't need it, but from a usage perspective it is odd that the same `tokenize()` function doesn't accept the same arguments. It also becomes awkward when you want to do something more dynamic like ```python from transformers import BertTokenizer, RobertaTokenizer models = { 'bert': (BertTokenizer, 'bert-base-uncased'), 'roberta': (RobertaTokenizer, 'roberta-base') } # from user-input or from config mname = 'bert' tokenizer = models[mname][0].from_pretrained(models[mname][1]) print(tokenizer.tokenize('They were hugging.')) # ['They', 'Ġwere', 'Ġhugging', '.'] print(tokenizer.tokenize('hugging', add_prefix_space=mname == 'roberta')) # roberta: ['Ġhugging'] # bert: TypeError: _tokenize() got an unexpected keyword argument 'add_prefix_space' ``` I hope it's clear what I am trying to say. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
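A sketch of an alternative to hand-rolled mappings, assuming a version of the library where the "fast" tokenizers and `return_offsets_mapping` are available: character offsets into the original string make it straightforward to recover which word each sub-token came from.

```python
# Sketch: recover token -> original-text character positions via offsets.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
encoded = tokenizer.encode_plus('Word-embedding is so nice', return_offsets_mapping=True)
for token_id, (start, end) in zip(encoded['input_ids'], encoded['offset_mapping']):
    # special tokens such as [CLS]/[SEP] typically come back with a (0, 0) span
    print(tokenizer.convert_ids_to_tokens(token_id), (start, end))
```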
transformers
2,139
closed
About Summarization
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Thank you very much for your wonderful work. I found that some new code for summarization has been added from the "pretrained encoder" paper. However, I see only the evaluation part of the code. I want to ask if you will add the code for the training part. Thank you very much!
12-11-2019 09:37:11
12-11-2019 09:37:11
If you want to look the source code used for training the model, you can look at the source [GitHub](https://github.com/nlpyang/PreSumm), in particular you can view the `src/train.py`, `src/train_abstractive.py` or `src/train_extractive.py` Python scripts.<|||||>@TheEdoardo93 Thank you for your reply. I know, will you plan to integrate the source training code into transformers? It is more convenient to use your transformers code for training.<|||||>At the moment, I think that it is **not** on the roadmap. Do you have a particular reason for asking to integrate the training algorithm into this library? > @TheEdoardo93 Thank you for your reply. I know, will you plan to integrate the source training code into transformers? It is more convenient to use your transformers code for training.<|||||>@TheEdoardo93 I think this is a good encoder-decoder framework based on BERT. In addition to the summary task, it can also do many other generation tasks. If the training code can be integrated into this library, it can be used to finetune more downstream generation tasks. I think this library currently lacks downstream fine-tuning for NLG tasks, such like query generation, generative reading comprehension and other summarization tasks.<|||||>Thanks for the help. How do I load the checkpoints **model_step_20000.pt** that was trained from src/train.py to replace **model= BertAbs.from_pretrained("bertabs-finetuned-cnndm")** > If you want to look the source code used for training the model, you can look at the source [GitHub](https://github.com/nlpyang/PreSumm), in particular you can view the `src/train.py`, `src/train_abstractive.py` or `src/train_extractive.py` Python scripts. <|||||>Hello! As I know, you **can't** load a PyTorch checkpoint _directly_ in `BertAbs` model, you'll indeed get an error. A PyTorch checkpoint typically contains the model state dict. Therefore, you can try to use the following source code for your task: ``` > import transformers > import torch > from transformers import BertTokenizer > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) > from modeling_bertabs import BertAbs > model = BertAbs.from_pretrained('bertabs-finetuned-cnndm') > model.load_state_dict(torch.load(PATH_TO_PT_CHECKPOINT)) ``` where _PATH_TO_PT_CHECKPOINT_ could be e.g. _./input_checkpoints/model_step_20000.pt_. **N.B**: this code would work only in the case where the architecture of `bertabs-finetuned-cnndm` model is equal to the one you're trying to load into, otherwise an error occur! If this code doesn't work as expected, we can work together in order to solve your problem :) > Thanks for the help. How do I load the checkpoints **model_step_20000.pt** that was trained from src/train.py to replace **model= BertAbs.from_pretrained("bertabs-finetuned-cnndm")** > > > If you want to look the source code used for training the model, you can look at the source [GitHub](https://github.com/nlpyang/PreSumm), in particular you can view the `src/train.py`, `src/train_abstractive.py` or `src/train_extractive.py` Python scripts.<|||||>Its Important!! ADD IT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@TheEdoardo93 is there any way to load a pretrained model with different architecture? I used the source library to train a model with source embedding size of 1024 instead of 512 as in the pretrained one as 512 was too small for my data.
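As a hedged addition to the loading snippet above (reusing the `model` built there): checkpoints written by the PreSumm training scripts may wrap the weights rather than being a bare state dict, so it is worth inspecting the `.pt` file first. The `"model"` key below is an assumption, not something confirmed in this thread:

```python
# Sketch: tolerate a checkpoint that wraps the state dict under a key.
import torch

checkpoint = torch.load("./input_checkpoints/model_step_20000.pt", map_location="cpu")
state_dict = checkpoint["model"] if isinstance(checkpoint, dict) and "model" in checkpoint else checkpoint
model.load_state_dict(state_dict, strict=False)  # strict=False skips mismatched keys instead of raising
```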
transformers
2,138
closed
encode_plus not returning attention_mask and not padding
## 🐛 Bug Tested on RoBERTa and BERT of the master branch, the [`encode_plus`](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.encode_plus) method of the tokenizer does not return an attention mask. The documentation states that by default an attention_mask is returned, but I only get back the input_ids and the token_type_ids. Even when explicitly specifying `return_attention_mask=True`, I don't get that back. If these specific tokenizers (RoBERTa/BERT) don't support this functionality (which would seem odd), it might be useful to also put that in the documentation. As a small note, there's also a typo in the documentation: > return_attention_mask – (optional) Set to False to **avoir** returning attention mask (default True) Finally, it seems that `pad_to_max_length` isn't padding my input (see the example below). I also tried `True` instead of an integer, hoping that it would automatically pad up to max seq length in the batch, but to no avail. ```python from transformers import BertTokenizer if __name__ == '__main__': tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') orig_text = ['I like bananas.', 'Yesterday the mailman came by!', 'Do you enjoy cookies?'] edit_text = ['Do you?', 'He delivered a mystery package.', 'My grandma just baked some!'] # orig_sents and edit_text are lists of sentences for orig_sents, edit_sents in zip(orig_text, edit_text): orig_tokens = tokenizer.tokenize(orig_sents) edit_tokens = tokenizer.tokenize(edit_sents) seqs = tokenizer.encode_plus(orig_tokens, edit_tokens, return_attention_mask=True, return_tensors='pt', pad_to_max_length=120) print(seqs) ``` Output: ``` {'input_ids': tensor([[ 101, 1045, 2066, 26191, 1012, 102, 2079, 2017, 1029, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1]])} {'input_ids': tensor([[ 101, 7483, 1996, 5653, 2386, 2234, 2011, 999, 102, 2002, 5359, 1037, 6547, 7427, 1012, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]])} {'input_ids': tensor([[ 101, 2079, 2017, 5959, 16324, 1029, 102, 2026, 13055, 2074, 17776, 2070, 999, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]])} ```
12-11-2019 08:31:29
12-11-2019 08:31:29
Hi, thanks for raising this issue! When running this code on the master branch, I do get the attention mask as output, but only when removing the `return_tensors` argument. When running with this argument, it crashes because a list is being concatenated to a tensor. I'm fixing this in #2148. It's weird that you didn't get an error when running this line. On which commit are you based? `encode` and `encode_plus` take kwargs arguments so it wouldn't raise an error if one of your arguments (`pad_to_max_length`) was not supposed to be there (e.g. if running on an old version of transformers). `pad_to_max_length` is a boolean flag: if set to True with no `max_length` specified, it will pad the sequence up to the maximum sequence length the model can handle. If a `max_length` is specified, it will pad the sequence up to that number.<|||||>Hey! For me setting pad_to_max_length results in an error thrown. Just tried it out with the master branch but this resulted in the same error The code I'm executing: ``` titles = [['allround developer', 'Visual Studio Code'], ['allround developer', 'IntelliJ IDEA / PyCharm'], ['allround developer', 'Version Control']] enc_titles = [[tokenizer.encode_plus(title[0], max_length=13, pad_to_max_length=True), tokenizer.encode_plus(title[1], max_length=13, pad_to_max_length=True)] for title in titles] ``` The error that I am getting: ```TypeError Traceback (most recent call last) <ipython-input-213-349f66a39abe> in <module> 4 # titles = [' '.join(title) for title in titles] 5 print(titles) ----> 6 enc_titles = [[tokenizer.encode_plus(title[0], max_length=4, pad_to_max_length=True), tokenizer.encode_plus(title[1], max_length=4)] for title in titles] <ipython-input-213-349f66a39abe> in <listcomp>(.0) 4 # titles = [' '.join(title) for title in titles] 5 print(titles) ----> 6 enc_titles = [[tokenizer.encode_plus(title[0], max_length=4, pad_to_max_length=True), tokenizer.encode_plus(title[1], max_length=4)] for title in titles] /usr/local/lib/python3.7/site-packages/transformers/tokenization_utils.py in encode_plus(self, text, text_pair, add_special_tokens, max_length, stride, truncation_strategy, return_tensors, return_token_type_ids, return_overflowing_tokens, return_special_tokens_mask, **kwargs) 816 If there are overflowing tokens, those will be added to the returned dictionary 817 stride: if set to a number along with max_length, the overflowing tokens returned will contain some tokens --> 818 from the main sequence returned. The value of this argument defines the number of additional tokens. 819 truncation_strategy: string selected in the following options: 820 - 'longest_first' (default) Iteratively reduce the inputs sequence until the input is under max_length /usr/local/lib/python3.7/site-packages/transformers/tokenization_utils.py in get_input_ids(text) 808 the `tokenize` method) or a list of integers (tokenized string ids using the `convert_tokens_to_ids` 809 method) --> 810 text_pair: Optional second sequence to be encoded. 
This can be a string, a list of strings (tokenized 811 string using the `tokenize` method) or a list of integers (tokenized string ids using the 812 `convert_tokens_to_ids` method) /usr/local/lib/python3.7/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs) 657 sub_text = sub_text.strip() 658 if i == 0 and not sub_text: --> 659 result += [tok] 660 elif i == len(split_text) - 1: 661 if sub_text: /usr/local/lib/python3.7/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text) 654 result = [] 655 split_text = text.split(tok) --> 656 for i, sub_text in enumerate(split_text): 657 sub_text = sub_text.strip() 658 if i == 0 and not sub_text: /usr/local/lib/python3.7/site-packages/transformers/tokenization_utils.py in <genexpr>(.0) 654 result = [] 655 split_text = text.split(tok) --> 656 for i, sub_text in enumerate(split_text): 657 sub_text = sub_text.strip() 658 if i == 0 and not sub_text: TypeError: _tokenize() got an unexpected keyword argument 'pad_to_max_length'``` <|||||>Hm, you're right. I think it was (again) an issue with the notebook that I was testing this time, where some values from previous cells were used or something like that. Thanks for the fix! Now that we're at the topic, though, it might be nice to have a convenience method for batch processing? Something along these lines where `pad_to_batch_length` pads up to the max batch length (rather than max_seq_length of the model) to save computation/memory. ```python def enocde_batch_plus(batch, batch_pair=None, pad_to_batch_length=False, return_tensors=None, **kwargs): def merge_dicts(list_of_ds): # there's probably a better way of doing this d = defaultdict(list) for _d in list_of_ds: for _k, _v in _d.items(): d[_k].append(_v) return dict(d) encoded_inputs = [] batch_pair = [None] * len(batch) if batch_pair is None else batch_pair for firs_sent, second_sent in zip(batch, batch_pair): encoded_inputs.append(tokenizer.encode_plus(firs_sent, second_sent, **kwargs)) encoded_inputs = merge_dicts(encoded_inputs) if pad_to_batch_length: max_batch_len = max([len(l) for l in encoded_inputs['input_ids']]) # pad up to max_batch_len, similar to how it's done ine in prepare_for_model() if return_tensors: # convert to tensors, similar to how it's done in prepare_model() pass return encoded_inputs ```<|||||>@Jarvanerp I cannot reproduce your issue, though. Your code works for me. 
```python # output [[{'input_ids': [101, 2035, 22494, 4859, 9722, 102, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]}, {'input_ids': [101, 5107, 2996, 3642, 102, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]}], [{'input_ids': [101, 2035, 22494, 4859, 9722, 102, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]}, {'input_ids': [101, 13420, 3669, 3501, 2801, 1013, 1052, 17994, 27292, 102, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]}], [{'input_ids': [101, 2035, 22494, 4859, 9722, 102, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]}, {'input_ids': [101, 2544, 2491, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]}]] ```<|||||>@BramVanroy Thanks for your comment! It made me try it out in just a plain Python file instead of a Jupyter notebook and it worked... 😄 <|||||>@BramVanroy Indeed, batch processing would be a cool feature, especially when padding's involved. We're thinking about it cc @mfuntowicz @thomwolf <|||||>@LysandreJik That's some good news! Looking forward to that; it will help getting rid of boiler plate stuff in our code.<|||||>@LysandreJik Just to keep you updated, this is what I am using now. (Padding and converting to tensors are modified versions of those in `prepare_model`.) I think it covers most if not all functionality of `encode_plus`. If you want, I can look at brushing it up, adding tests similar to those for `encode_plus`, add an `encode_batch` method and so on, and do a PR. ```python def encode_batch_plus(batch, batch_pair=None, pad_to_batch_length=False, return_tensors=None, return_token_type_ids=True, return_attention_mask=True, return_special_tokens_mask=False, **kwargs): if pad_to_batch_length and 'pad_to_max_length' in kwargs and kwargs['pad_to_max_length']: raise ValueError("'pad_to_batch_length' and 'pad_to_max_length' cannot be used simultaneously.") def merge_dicts(list_of_ds): d = defaultdict(list) for _d in list_of_ds: for _k, _v in _d.items(): d[_k].append(_v) return dict(d) # gather all encoded inputs in a list of dicts encoded = [] batch_pair = [None] * len(batch) if batch_pair is None else batch_pair for firs_sent, second_sent in zip(batch, batch_pair): # return_tensors=None: don't convert to tensors yet. 
Do that manually as the last step encoded.append(TOKENIZER.encode_plus(firs_sent, second_sent, return_tensors=None, return_token_type_ids=return_token_type_ids, return_attention_mask=return_attention_mask, return_special_tokens_mask=return_special_tokens_mask, **kwargs)) # convert list of dicts in a single merged dict encoded = merge_dicts(encoded) if pad_to_batch_length: max_batch_len = max([len(l) for l in encoded['input_ids']]) if TOKENIZER.padding_side == 'right': if return_attention_mask: encoded['attention_mask'] = [mask + [0] * (max_batch_len - len(mask)) for mask in encoded['attention_mask']] if return_token_type_ids: encoded["token_type_ids"] = [ttis + [TOKENIZER.pad_token_type_id] * (max_batch_len - len(ttis)) for ttis in encoded['token_type_ids']] if return_special_tokens_mask: encoded['special_tokens_mask'] = [stm + [1] * (max_batch_len - len(stm)) for stm in encoded['special_tokens_mask']] encoded['input_ids'] = [ii + [TOKENIZER.pad_token_id] * (max_batch_len - len(ii)) for ii in encoded['input_ids']] elif TOKENIZER.padding_side == 'left': if return_attention_mask: encoded['attention_mask'] = [[0] * (max_batch_len - len(mask)) + mask for mask in encoded['attention_mask']] if return_token_type_ids: encoded['token_type_ids'] = [[TOKENIZER.pad_token_type_id] * (max_batch_len - len(ttis)) for ttis in encoded['token_type_ids']] if return_special_tokens_mask: encoded['special_tokens_mask'] = [[1] * (max_batch_len - len(stm)) + stm for stm in encoded['special_tokens_mask']] encoded['input_ids'] = [[TOKENIZER.pad_token_id] * (max_batch_len - len(ii)) + ii for ii in encoded['input_ids']] else: raise ValueError(f"Invalid padding strategy: {TOKENIZER.padding_side}") if return_tensors is not None: if return_tensors in {'pt', 'tf'}: encoded['input_ids'] = tf.constant(encoded['input_ids']) if return_tensors == 'tf' \ else torch.tensor(encoded['input_ids']) if 'attention_mask' in encoded: encoded['attention_mask'] = tf.constant(encoded['attention_mask']) if return_tensors == 'tf' \ else torch.tensor(encoded['attention_mask']) if 'token_type_ids' in encoded: encoded['token_type_ids'] = tf.constant(encoded['token_type_ids']) if return_tensors == 'tf' \ else torch.tensor(encoded['token_type_ids']) if 'special_tokens_mask' in encoded: encoded['special_tokens_mask'] = tf.constant(encoded['special_tokens_mask']) if return_tensors == 'tf' \ else torch.tensor(encoded['special_tokens_mask']) # should num_truncated_tokens, overflowing_tokens also be converted to tensors? # if yes then this could be generalised in a for loop/dict comprehension converting all k,v to k,tensor(v) else: raise ValueError(f"Cannot return tensors with value '{return_tensors}'") return encoded ```<|||||>Hi @BramVanroy, thank you for sharing! I believe @mfuntowicz is working on a similar implementation [on the cli branch](https://github.com/huggingface/transformers/commit/0b51532ce94140cdb22f761b09fff28cce76f985#diff-e8b171e32a922a1fb8080ebf163f28af)<|||||>Aha, great. I couldn't wait because I needed it for a shared task, but nice to see it's taking form. Almost there!<|||||>@BramVanroy @LysandreJik I don't think the padding issue is still resolved yet.<|||||>> @BramVanroy @LysandreJik I don't think the padding issue is still resolved yet. Can you give more information? A minimal example that we can copy-and-paste as well as your expected output would be nice.<|||||>Hello, I confirm that the padding issue is not resolved yet. 
It works with `return_overflowing_tokens=False` but not `return_overflowing_tokens=True` for some reason, see sample code below: ```py >>> tokenizer=BertTokenizer.from_pretrained('bert-base-cased') >>> fake_batch = ["foo "*100, "foo "*42] >>> text_encoded_plus=tokenizer.batch_encode_plus(fake_batch, add_special_tokens=False, max_length=10, pad_to_max_length=True, return_tensors='pt', return_attention_mask=True, return_overflowing_tokens=False) >>> print(text_encoded_plus['input_ids'].shape, text_encoded_plus['attention_mask'].shape) torch.Size([2, 10]) torch.Size([2, 10]) ``` ```py >>> text_encoded_plus=tokenizer.batch_encode_plus(fake_batch, add_special_tokens=False, max_length=10, pad_to_max_length=True, return_tensors='pt', return_attention_mask=True, return_overflowing_tokens=True) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~/anaconda3/envs/pyannote/lib/python3.7/site-packages/transformers/tokenization_utils.py in convert_to_tensors_(self, batch_outputs, return_tensors) 1801 try: -> 1802 batch_outputs[key] = torch.tensor(value) 1803 except ValueError: ValueError: expected sequence of length 190 at dim 1 (got 74) During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-249-da5ce1e175a8> in <module> 7 return_tensors='pt', 8 return_attention_mask=mask, ----> 9 return_overflowing_tokens=True) 10 print(text_encoded_plus['input_ids'].shape) ~/anaconda3/envs/pyannote/lib/python3.7/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, pad_to_max_length, is_pretokenized, return_tensors, return_token_type_ids, return_attention_masks, return_overflowing_tokens, return_special_tokens_masks, return_offsets_mapping, return_lengths, **kwargs) 1784 if return_tensors is not None: 1785 -> 1786 self.convert_to_tensors_(batch_outputs, return_tensors) 1787 return BatchEncoding(batch_outputs) 1788 ~/anaconda3/envs/pyannote/lib/python3.7/site-packages/transformers/tokenization_utils.py in convert_to_tensors_(self, batch_outputs, return_tensors) 1802 batch_outputs[key] = torch.tensor(value) 1803 except ValueError: -> 1804 raise ValueError(self.UNEVEN_SEQUENCES_FOR_BATCH_MSG) 1805 except RuntimeError: 1806 if None in [item for sequence in value for item in sequence]: ValueError: The sequences building the batch are not of the same size, no tensor can be built. Set `pad_to_max_length=True` to pad the smaller sequencesup to the larger sequence's length. ```<|||||>Indeed, I can reproduce. Looking into it now.<|||||>The issue with this is that slow tokenizers cannot convert the `overflowing_tokens` to tensors as these have mismatching dimensions. This was never handled, unfortunately, so I added a better error message in #5633. The good news is that fast tokenizers handle this feature! Simply replacing the `BertTokenizer` by `BertTokenizerFast` should do the job. Thanks for letting us know of this issue.<|||||>Oh okay, thank you ! I thought that the regular, kept tokens were not padded :)
transformers
2,137
closed
Tokenization in C++
Is there any general strategy for tokenizing text in C++ in a way that's compatible with the existing pretrained `BertTokenizer` implementation? I'm looking to use a finetuned BERT model in C++ for inference, and currently the only way seems to be to reproduce the `BertTokenizer` code manually (or modify it to be compatible with torchscript). Has anyone come up with a better solution than this?
12-11-2019 08:18:14
12-11-2019 08:18:14
You should wait a few days if you can because @n1t0 is working on something that will very likely solve your problem and it should be ready for a first release before the end of the year.<|||||>Any update on this? It is already beyond "the end of the year".<|||||>I also tried to figure out an alternative to a manual tokenizer. Will your approach handle multiple models? I'm looking for a GPT-2 tokenizer in C++.<|||||>Check out this repo: https://github.com/huggingface/tokenizers You can already use it from transformers, using `BertTokenizerFast`<|||||>Why was this closed? https://github.com/huggingface/tokenizers offers no C++ solution other than developing a Rust -> C++ interop wrapper yourself, which wouldn't work in my case.<|||||>following<|||||>This is still not available.<|||||>We will not develop a C++ implementation of tokenizers. In case you would like C++ bindings for the `tokenizers` library, I recommend commenting on this issue dedicated to it instead: https://github.com/huggingface/tokenizers/issues/185<|||||>https://github.com/wangkuiyi/huggingface-tokenizer-in-cxx/ I built the C++ version. It works on my macOS and iPhones.<|||||>> https://github.com/wangkuiyi/huggingface-tokenizer-in-cxx/ I built the C++ version. It works on my macOS and iPhones. Thank you for sharing, this is exactly what I needed.<|||||>Sharing a nice piece of work: https://github.com/mlc-ai/tokenizers-cpp<|||||>I am looking for a C++ implementation of the tokenizer used in this model: https://github.com/kuprel/min-dalle Can anybody comment on whether it is similar to the Hugging Face tokenizer?
transformers
2,136
closed
is the tokenization broken for bert?
## 🐛 Bug <!-- Important information --> Model I am using is `bert-base-uncased`: Language I am using the model on (English): ## To Reproduce Steps to reproduce the behavior: 1. Just Ran the example from the docs ``` import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM # OPTIONAL: if you want to have more information on what's happening under the hood, activate the logger as follows import logging logging.basicConfig(level=logging.INFO) # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', force_download=True) # Tokenize input text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = tokenizer.tokenize(text) # Mask a token that we will try to predict back with `BertForMaskedLM` masked_index = 8 tokenized_text[masked_index] = '[MASK]' print(tokenized_text) assert tokenized_text == ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]'] >INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /Users/1570137/.cache/torch/transformers/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084 ``` The assertion fails and the `print(tokenized_text)` returns this actually, `['[', 'cl', '##s', ']', 'who', 'was', 'jim', 'henson', '[MASK]', '[', 'sep', ']', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[', 'sep', ']']` ## Extra Details >! pip show transformers ``` Name: transformers Version: 2.2.1 Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch Home-page: https://github.com/huggingface/transformers Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors Author-email: [email protected] License: Apache Location: /Users/1570137/anaconda3/envs/my_env/lib/python3.7/site-packages Requires: sacremoses, numpy, tqdm, sentencepiece, regex, requests, boto3 Required-by: flair ``` I am on MacOS, No GPU. Also is that behaviour expected? Thanks.
12-11-2019 03:58:41
12-11-2019 03:58:41
This should be fixed in the current master but not in a release AFAIK. See https://github.com/huggingface/transformers/issues/2132 and close this issue please.<|||||>Okay thanks!
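For reference, a small sketch of the approach that avoids the problem altogether: let `encode_plus` insert the special tokens instead of writing `[CLS]`/`[SEP]` into the raw text by hand (the expected token list below matches the assertion from the issue):

```python
# Sketch: let the tokenizer add [CLS]/[SEP] itself, then mask a position.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encoded = tokenizer.encode_plus("Who was Jim Henson ?", "Jim Henson was a puppeteer",
                                add_special_tokens=True)
input_ids = list(encoded['input_ids'])
input_ids[8] = tokenizer.convert_tokens_to_ids('[MASK]')  # 103 for bert-base-uncased
print(tokenizer.convert_ids_to_tokens(input_ids))
# ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']
```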
transformers
2,135
closed
Is there support for TensorflowJs?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I managed to save my tenforflow 2.0 model and I see keras .h5 and config.json files. When I run the tensorflowjs converter it seems to run with no issues. !tensorflowjs_converter --input_format=keras save/tf_model.h5 save/tfjs_model I see the output as expected in the generated files. But, when I try to load them from Javascript, I get these errors: models.ts:287 Uncaught (in promise) TypeError: Cannot read property 'model_config' of null at models.ts:287 at common.ts:14 at Object.next (common.ts:14) at a (common.ts:14) I found some Github issues like this one https://github.com/tensorflow/tfjs/issues/931, that mention the issue is that the .h5 file only includes the weights and they provide a workaround which involves saving the model with the weights, but it is not clear to me how to do that with the HF library. Is this something you support or is there a way to get the Keras model with the weights?
12-11-2019 02:54:16
12-11-2019 02:54:16
My understanding is that tfjs is still kinda unstable so you’d be better off bringing that issue there. That being said, @Pierrci has tried to do similar stuff so might be able to chime in.<|||||>thanks, @julien-c I will repost there. Do you think I could have better luck if I try this with torchjs instead? I tried ONNX and faced multiple roadblocks. I didn't anticipate running transformer models in JavaScript would be so challenging 😅<|||||>> I found some Github issues like this one [tensorflow/tfjs#931](https://github.com/tensorflow/tfjs/issues/931), that mention the issue is that the .h5 file only includes the weights and they provide a workaround which involves saving the model with the weights, but it is not clear to me how to do that with the HF library. > > Is this something you support or is there a way to get the Keras model with the weights? Yes the first step is actually to convert the Keras model into a SavedModel format, you can see this notebook as an example: https://colab.research.google.com/drive/1p1Nifh1P-vqAZ1gHsNSCXAHzVWzl5YPP (from my experiments it doesn't work on all models). Once you have the SavedModel then you can use (in another environment with TF 1.15 since it's the [TFJS converter requirement](https://github.com/tensorflow/tfjs/blob/master/tfjs-converter/python/requirements.txt)) the `tfjs.converters.convert_tf_saved_model` method to convert to TFJS format. But then you might run into exceptions like `Unsupported Ops` (it seems a lot of operators are yet to be implemented in TFJS). Feel free to cross-reference this issue if you post another issue in the TFJS repo! <|||||>thanks, @Pierrci let me try this out<|||||>@Pierrci the conversion to savedmodel works, but now I get an error when converting to tfjs: tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node StatefulPartitionedCall/tf_bert_for_sequence_classification/bert/embeddings/position_embeddings/embedding_lookup was passed float from Func/StatefulPartitionedCall/input/_3:0 incompatible with expected resource. I will try changing the input tensor spec to float32<|||||>@hamletbatista which version of TensorFlow did you use to convert to SavedModel format? Is it the nightly or an older one like 2.0?<|||||>@Pierrci I used 2.0. Then, I created a second version using the nightly, but my Colab crashed. Trying it now. I'll let you know. <|||||>Hi @Pierrci made a copy of your notebook and tried my model there and got it to export fine. Thanks a lot for your help! Now, let's see if it works in JavaScript :)<|||||>Now I get a missing operator AddV2 in TFJS Uncaught (in promise) Error: Tensorflow Op is not supported: AddV2 I will take a break and look into this. <|||||>Got it to work with tfjs 1.4.0.<|||||>Wonderful! Can I ask you what is the model you're working with @hamletbatista?<|||||>@Pierrci Sure. I wrote a couple of articles about this. See https://www.searchenginejournal.com/automated-intent-classification-using-deep-learning-part-2/318691/ I'm trying to get this to work from within Excel and need it working in JavaScript while keeping things simple. I tried Ludwig, but it doesn't support this. See https://github.com/uber/ludwig/issues/575 <|||||>Thanks!
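A rough sketch of the path that eventually worked in this thread (Keras model → TF SavedModel → TFJS). The model class, paths, and the behaviour of `tf.saved_model.save` on a given model are assumptions that depend on the TF/TFJS versions involved:

```python
# Sketch: export a fine-tuned TF model to SavedModel, then convert to TFJS.
import tensorflow as tf
from transformers import TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("./save")  # dir holding tf_model.h5 + config.json
tf.saved_model.save(model, "./save/saved_model")

# Then, in an environment matching the TFJS converter requirements (TF 1.15 at the time):
# import tensorflowjs as tfjs
# tfjs.converters.convert_tf_saved_model("./save/saved_model", "./save/tfjs_model")
```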
transformers
2,134
closed
closes #1960 Add saving and resuming functionality for remaining examples
#1987 was merged in before I could update the other pytorch examples. This should also close #1960 once it's merged in.
12-11-2019 02:53:11
12-11-2019 02:53:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=h1) Report > Merging [#2134](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/18601c3b6e46e05c4a78303a9e6036f795f82180?src=pr&el=desc) will **decrease** coverage by `1.07%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2134/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2134 +/- ## ========================================== - Coverage 78.74% 77.67% -1.08% ========================================== Files 131 131 Lines 19736 19736 ========================================== - Hits 15541 15329 -212 - Misses 4195 4407 +212 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.27% <0%> (-80.8%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `77.15% <0%> (-17.25%)` | :arrow_down: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `54.32% <0%> (-10.1%)` | :arrow_down: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `71.42% <0%> (-2.3%)` | :arrow_down: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.27% <0%> (-2.21%)` | :arrow_down: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.19% <0%> (-1.33%)` | :arrow_down: | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `63.58% <0%> (-0.72%)` | :arrow_down: | | [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2134/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.35% <0%> (-0.59%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=footer). Last update [18601c3...b03872a](https://codecov.io/gh/huggingface/transformers/pull/2134?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,133
closed
Refactor functionality of run_squad and squad_utils into XXXForQuestionAnswering
## Request Push some/most/all the functionality of the squad training scripts into the class ``XXXForQuestionAnswering``. ## Alt Request. I'm guessing this is immediately objectionable, since ``XXXForQuestionAnswering`` is just the nice clean NN. No messy string manipulation functions welcome here. So perhaps I request a ``XXXQuestionAnswerHandler`` class. # Context Basically, as far as I can tell, there is quite a gulf between the input/output of ``XXXForQuestionAnswering`` and actually doing squad. Currently, we attempt to straddle this gulf by a number of scripts, perhaps what's called glue code(?). These require passing around of many arguments, so many that I can't keep track of them, and a lot of conditionals to treat the idiosyncrasies of different models. I think separating the ``XXXForQuestionAnswering`` from actually being able to do a squad like task is a cause of some problems. If these models are really supposed to be used for question answering, and not simply churning through a full training/eval squad-style json, then these auxilliary scripts of answer selection, and answer cleaning should be fastened to the model firmly (like within some handler). Squad has set the defacto format for CDQA, and many of the steps in the scripts would be useful in wider applications. ## Context II A massive thanks for refactoring the squad training code. It is so much clearer than it was but I still think there's big room for improvement. For me, using the previous incarnation of squad_run was ummm... constantly problematic (eg [like this](https://github.com/huggingface/transformers/issues/2038)). Surely 97% cos I'm a noob - some of the issues I had unfathomably basic. But the scripts were really not user friendly (now improved but still - see previous), and the current classes really don't give much away: * "Hello, my dog is cute" is not a CDQA [?!](https://github.com/huggingface/transformers/blob/master/transformers/modeling_xlm.py#L869). * The ``...Simple`` class, partly clarified by LysandreJik's [post](https://github.com/huggingface/transformers/issues/2038#issuecomment-564220238), but doesn't explain why these are still lingering on the master branch without appearing in the official docs, while the tensor flow analogue only has a simple... I mean its fine, but its pretty confusing if you're already confused. * the documentation of the output being "well depends on the config" which although true is... not exactly useful to the uninitiated. I got to the point of trying to refactor the training scripts myself, but made very little progress. Very happy to see someone else has been on the case. ## Context III An example functionality: allow pre-cache examples-to-feature - effectively do a sort of dry run. Running ``squad_run.py`` on a GPU machine saw a lot of idle time where one CPU would take an hour to cache the examples suitable to the configuration of the finetuning. Why not build a ``examples_to_features`` method within the class? Then do this on your desktop before shipping if off for training _oven-ready_. At the moment in the script ``squad_run.py``, the caching for the training is called from ``main`` (not ``train``) and caching the evaluation was done in ``evaluate``. I don't follow this decision. I tried to extract the caching function, but it was super hacky and would be addressed by the request. ## Invitation for comments I'm sure this isn't a new idea, and hasn't happened cos its either too much work or a terrible idea. 
I tried to see what others were doing but the scripts and specifics people have used to arrive at their claimed squad scores do not seem as available as other resources ( ... XLM? ... Roberta? - Am I missing something?) I'd be very happy to hear thoughts on this, including responses that begin > _"This is nonsense because ..."_ Thanks HF for your awesome opensource NLP lib
12-11-2019 00:23:36
12-11-2019 00:23:36
Thanks a lot for your input. We're trying to continually improve our training scripts and would like to keep them efficient while keeping them understandable. As you have noticed, we have recently refactored the glue and squad scripts somewhat, and will continue to do so. Your input is appreciated and we're keeping it in mind for the future improvements that are bound to happen (sooner rather than later).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
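On the pre-caching ("oven-ready") idea from the issue, a minimal sketch of doing the examples→features conversion up front on a CPU box and saving the result. The function and argument names follow `transformers/data/processors/squad.py` at the time and are assumptions for other versions:

```python
# Sketch: build and cache SQuAD features before shipping them to the GPU machine.
import torch
from transformers import BertTokenizer
from transformers.data.processors.squad import SquadV2Processor, squad_convert_examples_to_features

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
examples = SquadV2Processor().get_train_examples("path/to/squad_data")
features = squad_convert_examples_to_features(examples=examples,
                                              tokenizer=tokenizer,
                                              max_seq_length=384,
                                              doc_stride=128,
                                              max_query_length=64,
                                              is_training=True)
torch.save(features, "cached_train_features.pt")
```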
transformers
2,132
closed
`bert-base-uncased` tokenizer broke around special tokens in v2.2.1
In `v2.2.1`, the `bert-base-uncased` tokenizer changed in a way that's probably not intentional: ``` Python 3.7.5 (default, Oct 25 2019, 10:52:18) [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from transformers.tokenization_auto import AutoTokenizer To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configurationand file/data utilities can be used. >>> t = AutoTokenizer.from_pretrained("bert-base-uncased"); t.encode_plus(text='A, [MASK] AllenNLP sentence.') { 'input_ids': [101, 1037, 1010, 1031, 7308, 1033, 5297, 20554, 2361, 6251, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` In `v2.2.0`: ``` Python 3.7.5 (default, Oct 25 2019, 10:52:18) [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from transformers.tokenization_auto import AutoTokenizer To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html Neither PyTorch nor TensorFlow >= 2.0 have been found.Models won't be available and only tokenizers, configurationand file/data utilities can be used. >>> t = AutoTokenizer.from_pretrained("bert-base-uncased"); t.encode_plus(text='A, [MASK] AllenNLP sentence.') { 'special_tokens_mask': [1, 0, 0, 0, 0, 0, 0, 0, 0, 1], 'input_ids': [101, 1037, 1010, 103, 5297, 20554, 2361, 6251, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` (indented the results for clarity) The key difference is that in `v2.2.0`, it recognizes the `[MASK]` token as a special token and gives it token id `103`. In `v2.2.1`, this no longer happens. The behavior of `bert-base-cased` has not changed, so I don't think this is an intentional change.
12-10-2019 23:53:11
12-10-2019 23:53:11
`git bisect` says the commit introducing this problem is 7246d3c2f93c4461f3ec8ada7a26a002d8f196ea.<|||||>Any way you could run the same test on `master`? It might have been fixed since.<|||||>I did. It was not fixed in master. It only affects the [MASK] token. On Tue, Dec 10, 2019, 16:25 Julien Chaumond <[email protected]> wrote: > Any way you could run the same test on master? It might have been fixed > since. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/2132?email_source=notifications&email_token=AAHAYPUUEHE5JTMYXUYYXHLQYAXQHA5CNFSM4JZGP2YKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEGRNYOY#issuecomment-564321339>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AAHAYPRQDOQTJNFD6ULI4JDQYAXQHANCNFSM4JZGP2YA> > . > <|||||>I screwed up. It is fixed in `master` after all.<|||||>Good to hear! We'll push a new release soon, cc @LysandreJik
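A quick check that distinguishes the broken 2.2.1 behaviour from the fixed one (the expected ids follow the outputs quoted in the issue):

```python
# Sketch: [MASK] should stay a single special token with id 103.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.convert_tokens_to_ids('[MASK]'))  # 103
print(tokenizer.encode_plus('A, [MASK] AllenNLP sentence.')['input_ids'])
# fixed: [101, 1037, 1010, 103, 5297, 20554, 2361, 6251, 1012, 102]
```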
transformers
2,131
closed
[AB-219] Progress bar
## This PR: - adds progress bars to tokenization
12-10-2019 20:46:59
12-10-2019 20:46:59
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=h1) Report > Merging [#2131](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a73382706ce3c6905023872f63a680f0eb419a4?src=pr&el=desc) will **decrease** coverage by `0.25%`. > The diff coverage is `96%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2131/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2131 +/- ## ========================================== - Coverage 80.07% 79.82% -0.26% ========================================== Files 112 113 +1 Lines 16867 16885 +18 ========================================== - Hits 13506 13478 -28 - Misses 3361 3407 +46 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/timing.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3RpbWluZy5weQ==) | `100% <100%> (ø)` | | | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.09% <100%> (+0.07%)` | :arrow_up: | | [transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `95.57% <88.88%> (-0.36%)` | :arrow_down: | | [transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hbGJlcnQucHk=) | `82.9% <0%> (-6.84%)` | :arrow_down: | | [transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `84% <0%> (-6.41%)` | :arrow_down: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `93.33% <0%> (-3.59%)` | :arrow_down: | | [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `97.41% <0%> (-2.59%)` | :arrow_down: | | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `37.5% <0%> (-2.5%)` | :arrow_down: | | [transformers/hf\_api.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2hmX2FwaS5weQ==) | `95% <0%> (-2.5%)` | :arrow_down: | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.24% <0%> (-1.3%)` | :arrow_down: | | ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/2131/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=footer). Last update [6a73382...a4d0bc7](https://codecov.io/gh/huggingface/transformers/pull/2131?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,130
closed
[BREAKING CHANGE] Setting all ignored index to the PyTorch standard
The CrossEntropy loss, as well as other losses, accept a value as an index they will ignore when computing the loss. This value was set to -1 in some cases, but left to the default value (-100) in other cases. To stay consistent we're setting the value to be the default PyTorch one in all cases. Includes a few documentation fixes.
12-10-2019 20:45:57
12-10-2019 20:45:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=h1) Report > Merging [#2130](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a73382706ce3c6905023872f63a680f0eb419a4?src=pr&el=desc) will **decrease** coverage by `1.17%`. > The diff coverage is `92.3%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2130/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2130 +/- ## ========================================== - Coverage 80.07% 78.89% -1.18% ========================================== Files 112 112 Lines 16867 16867 ========================================== - Hits 13506 13307 -199 - Misses 3361 3560 +199 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.9% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.34% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `90.43% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2NhbWVtYmVydC5weQ==) | `100% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <100%> (-12.36%)` | :arrow_down: | | [transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.44% <100%> (ø)` | :arrow_up: | | [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.27% <100%> (-2.21%)` | :arrow_down: | | [transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.13% <100%> (-1.33%)` | :arrow_down: | | [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <100%> (ø)` | :arrow_up: | | [transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `72.21% <100%> (-2.33%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/2130/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=footer). Last update [6a73382...dc667ce](https://codecov.io/gh/huggingface/transformers/pull/2130?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,129
closed
Progress indicator improvements when downloading pre-trained models.
Downloading GPT2-XL can take a while. If you're not expecting it, the current progress bar can be confusing. It looks like this: ``` 4%|▉          | 257561600/6431878936 [00:33<16:12, 6351328.14B/s] ``` With this change, the progress bar is much more readable: ``` Downloading:   3%|▋         | 166M/6.43G [00:30<12:34, 8.31MB/s] ``` Also, by importing from `tqdm.auto` you will get a nice graphical progress bar if you're running in a jupyter notebook. (Unless you're using jupyter lab and you don't have widgets set up properly, but that's its own ball of wax.)
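A rough sketch of the tqdm settings that give the human-readable bar above (the exact call inside `file_utils.py` may differ slightly):

```python
from tqdm.auto import tqdm  # falls back to the plain console bar outside notebooks

total = 6431878936  # size of gpt2-xl in bytes, for illustration
progress = tqdm(unit="B", unit_scale=True, total=total, desc="Downloading")
for _ in range(10):                 # pretend we received ten 1 MB chunks
    progress.update(1024 * 1024)    # unit_scale turns raw byte counts into "MB"/"GB"
progress.close()
```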
12-10-2019 19:40:23
12-10-2019 19:40:23
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=h1) Report > Merging [#2129](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a73382706ce3c6905023872f63a680f0eb419a4?src=pr&el=desc) will **not change** coverage. > The diff coverage is `50%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2129/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2129 +/- ## ======================================= Coverage 80.07% 80.07% ======================================= Files 112 112 Lines 16867 16867 ======================================= Hits 13506 13506 Misses 3361 3361 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2129/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `40% <50%> (ø)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=footer). Last update [6a73382...58d75aa](https://codecov.io/gh/huggingface/transformers/pull/2129?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>That's really cool! Looks good to me.<|||||>LGTM as well, thanks!<|||||>Traceback (most recent call last): File "train.py", line 13, in <module> from transformers import * File "/home/user/.local/lib/python3.6/site-packages/transformers/__init__.py", line 20, in <module> from .file_utils import (TRANSFORMERS_CACHE, PYTORCH_TRANSFORMERS_CACHE, PYTORCH_PRETRAINED_BERT_CACHE, File "/home/user/.local/lib/python3.6/site-packages/transformers/file_utils.py", line 24, in <module> from tqdm.auto import tqdm ModuleNotFoundError: No module named 'tqdm.auto' Is there a way to override this when one's not using a jupyter notebook?<|||||>upgrading to tqdm-4.41.1 solved it!
transformers
2,128
closed
In which directory the downloaded roberta-base models will be stored on linux server conda environment
## In which directory will the downloaded roberta-base models be stored on a Linux server (conda environment)?
12-10-2019 19:25:10
12-10-2019 19:25:10
Models downloaded with the `XXXModel.from_pretrained` method are usually in the torch home folder, which is `~/.cache/torch/transformers`<|||||>Thanks for your response. I can see there are some files with names like b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda.json. Can we rename those files to follow the naming convention used at https://s3.amazonaws.com/models.huggingface.co/ (e.g. roberta-base-config.json) and then load them with RobertaTokenizer.from_pretrained('roberta-base')?<|||||>#2157 is very similar; perhaps it'll answer your question.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>If we have the model loaded, can we then find from where on disk it was loaded?
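A hedged sketch for the last two questions in this thread: in recent versions each cached file sits next to a small `<hash>.json` sidecar that records the original URL, so the hashed names can be mapped back to models, and `cache_dir` lets you choose the location up front. Nothing below is an official API guarantee.

```python
import json
import os

cache_dir = os.path.expanduser("~/.cache/torch/transformers")
for name in os.listdir(cache_dir):
    if name.endswith(".json"):
        with open(os.path.join(cache_dir, name)) as f:
            meta = json.load(f)
        # meta typically contains the source URL (and etag) of the cached file
        print(meta.get("url"), "->", name[: -len(".json")])

# Alternatively, keep the files somewhere predictable in the first place:
# tokenizer = RobertaTokenizer.from_pretrained("roberta-base", cache_dir="./hf_cache")
```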
transformers
2,127
closed
Where is extract_features.py and run_classifier.py ?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello! I couldn't find the extract_features.py and run_classifier.py. Have they been renamed ?
12-10-2019 17:14:27
12-10-2019 17:14:27
Reading the answer given by @thomwolf in #1123, I'm fairly sure the `extract_features.py` script has been removed from the repo, though it may come back in a future update. Reading the answer given by @ningjize in #1011, I'm fairly sure the `run_classifier.py` script has been reworked into the `run_glue.py` script, which you can find in the `examples/` directory [here](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py). > ## ❓ Questions & Help > Hello! I couldn't find extract_features.py and run_classifier.py. Have they been renamed?
transformers
2,126
closed
Model2Model: RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Bool
## ❓ Questions & Help I'm going to try the new Model2Model feature: ``` import torch import numpy as np from transformers import Model2Model, BertTokenizer, BertModel # device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device = torch.device("cpu") tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = Model2Model.from_pretrained('bert-base-uncased').to(device) source_ids = torch.tensor([tokenizer.encode("this is source sentence", add_special_tokens=True)], dtype=torch.long).to(device) target_ids = torch.tensor([tokenizer.encode("this is target sentence", add_special_tokens=True)], dtype=torch.long).to(device) model(source_ids, target_ids) ``` This is the output: ``` RuntimeError Traceback (most recent call last) <ipython-input-10-885d1d4b847f> in <module> ----> 1 model(source_ids, target_ids) /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/transformers/modeling_encoder_decoder.py in forward(self, encoder_input_ids, decoder_input_ids, **kwargs) 229 "attention_mask", None 230 ) --> 231 decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder) 232 233 return decoder_outputs + encoder_outputs /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, masked_lm_labels, encoder_hidden_states, encoder_attention_mask, lm_labels) 871 inputs_embeds=inputs_embeds, 872 encoder_hidden_states=encoder_hidden_states, --> 873 encoder_attention_mask=encoder_attention_mask) 874 875 sequence_output = outputs[0] /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /users/tr.amirhj/anaconda3/envs/genv/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask) 677 seq_ids = torch.arange(seq_length, device=device) 678 causal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None] --> 679 extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :] 680 else: 681 extended_attention_mask = attention_mask[:, None, None, :] RuntimeError: expected device cpu and dtype Float but got device cpu and dtype Bool ``` If I missed something?
12-10-2019 16:31:41
12-10-2019 16:31:41
In my environment, the code you've posted **works as expected**. - Python: 3.6.9 - Transformers: 2.2.1 (installed from PyPi with pip install transformers) - PyTorch: 1.3.1 - TensorFlow: 2.0 - OS: Ubuntu 16.04 Here the stack trace: ``` Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> import numpy as np >>> from transformers import Model2Model, BertTokenizer, BertModel /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/<user>/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) 2019-12-10 17:40:40.255384: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-12-10 17:40:40.277896: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz 2019-12-10 17:40:40.279096: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x558b757804f0 executing computations on platform Host. Devices: 2019-12-10 17:40:40.279146: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version >>> device = torch.device("cpu") >>> device device(type='cpu') >>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') >>> model = Model2Model.from_pretrained('bert-base-uncased').to(device) >>> source_ids = torch.tensor([tokenizer.encode("this is source sentence", add_special_tokens=True)], ... dtype=torch.long).to(device) >>> target_ids = torch.tensor([tokenizer.encode("this is target sentence", add_special_tokens=True)], ... 
dtype=torch.long).to(device) >>> >>> source_ids tensor([[ 101, 2023, 2003, 3120, 6251, 102]]) >>> target_ids tensor([[ 101, 2023, 2003, 4539, 6251, 102]]) >>> model(source_ids, target_ids) (tensor([[[ -6.3390, -6.3664, -6.4600, ..., -5.5354, -4.1787, -5.8384], [ -7.9225, -7.7588, -7.9552, ..., -6.6068, -5.5835, -6.9365], [-10.4273, -10.3139, -10.5899, ..., -9.5835, -7.8032, -9.9118], [ -8.8252, -8.6229, -8.8085, ..., -8.0037, -6.6364, -8.5376], [ -8.6978, -8.4959, -8.5453, ..., -7.9320, -6.6115, -8.7994], [-13.0414, -12.5687, -12.3714, ..., -10.1630, -11.1963, -9.3892]]], grad_fn=<AddBackward0>), tensor([[[-3.0067e-01, 1.5002e-01, 1.7042e-02, ..., -3.6836e-01, 2.2961e-01, 8.0086e-01], [-1.0987e+00, -2.3812e-01, 1.9028e-01, ..., -6.9975e-01, 6.7476e-01, 2.9067e-01], [-1.7711e-01, -3.5428e-01, 3.6858e-01, ..., -1.1280e-01, 1.6458e-01, 1.1558e+00], [-5.6245e-01, -1.9310e-01, 1.7546e-01, ..., -1.8610e-02, -1.1314e-03, 3.2184e-01], [-3.8065e-01, -1.8030e-01, -1.2957e-01, ..., 4.6774e-01, 1.4298e-01, -1.8563e-01], [ 7.8768e-01, 1.0423e-01, -4.0617e-01, ..., 2.6467e-01, -7.9018e-01, -1.9337e-01]]], grad_fn=<NativeLayerNormBackward>), tensor([[-9.0084e-01, -3.3628e-01, 2.9453e-01, 7.1089e-01, -1.0436e-02, -2.5144e-01, 9.0506e-01, 2.9434e-01, 1.6485e-01, -9.9996e-01, 2.0915e-01, 2.8445e-01, 9.8403e-01, -3.0770e-01, 9.2687e-01, -6.1045e-01, -1.2372e-01, -5.7582e-01, 3.9420e-01, -7.7367e-01, 6.4602e-01, 9.8964e-01, 6.7300e-01, 2.6016e-01, 4.0054e-01, 4.2634e-01, -6.1309e-01, 9.4336e-01, 9.6244e-01, 7.9033e-01, -7.7723e-01, 2.5581e-01, -9.9027e-01, -2.3506e-01, -1.6533e-01, -9.8790e-01, 2.4701e-01, -7.8211e-01, -9.2877e-02, -4.5130e-02, -9.2165e-01, 3.7376e-01, 9.9949e-01, -2.2205e-01, 1.6105e-01, -3.5879e-01, -9.9999e-01, 3.1183e-01, -9.0365e-01, -1.2721e-01, -5.7083e-02, -3.8538e-01, 2.2891e-01, 4.1976e-01, 4.4054e-01, 2.7219e-01, -1.6016e-02, 2.7714e-01, -1.6180e-01, -5.8537e-01, -6.2011e-01, 3.2424e-01, -1.1204e-01, -9.2093e-01, -1.9166e-01, -3.7498e-01, -1.5816e-01, -2.6796e-01, -1.0934e-01, -3.2014e-02, 8.7326e-01, 2.5321e-01, 3.1921e-01, -8.0303e-01, -3.4841e-01, 2.4700e-01, -4.6604e-01, 1.0000e+00, -4.6661e-01, -9.8111e-01, -1.2605e-01, -1.8299e-01, 4.1548e-01, 6.1520e-01, -3.6703e-01, -1.0000e+00, 3.6013e-01, -2.1875e-01, -9.9034e-01, 1.5795e-01, 4.0751e-01, -2.1697e-01, -3.0685e-01, 3.8455e-01, -1.3388e-01, -1.6273e-01, -3.3509e-01, 7.5851e-03, -2.6005e-01, -1.5252e-01, 1.6267e-01, -2.9343e-01, -1.8843e-01, -2.8192e-01, 1.9310e-01, -3.3882e-01, -4.8637e-01, 3.5417e-01, -4.0395e-01, 7.1347e-01, 3.5647e-01, -3.2761e-01, 3.3358e-01, -9.4952e-01, 5.4614e-01, -2.8969e-01, -9.8452e-01, -4.2365e-01, -9.8693e-01, 7.5074e-01, -3.5488e-02, -2.6717e-01, 9.6647e-01, 5.1186e-01, 2.8068e-01, -1.0258e-01, -2.3203e-03, -1.0000e+00, -9.8173e-02, -3.1035e-01, 2.0420e-01, -1.8622e-01, -9.8229e-01, -9.5138e-01, 6.5169e-01, 9.6339e-01, 2.2344e-01, 9.9859e-01, -2.5536e-01, 9.4590e-01, 3.1677e-01, -1.7800e-01, -5.1792e-01, -4.0876e-01, 5.2822e-01, 5.4555e-01, -8.1303e-01, 2.1158e-01, 9.4905e-02, -8.9056e-02, -2.3806e-01, -3.3301e-01, 1.6834e-01, -9.2588e-01, -4.2112e-01, 9.3633e-01, 2.8537e-01, 7.7606e-02, 7.2043e-01, -1.9238e-01, -3.9200e-01, 8.6078e-01, 3.3558e-01, 3.0295e-01, 6.4802e-02, 4.6284e-01, -8.7253e-02, 4.8427e-01, -9.0531e-01, 3.4391e-01, 4.1636e-01, -1.6641e-01, 1.7450e-01, -9.7965e-01, -3.0878e-01, 4.5623e-01, 9.8710e-01, 8.1641e-01, 2.8662e-01, 5.9909e-02, -3.3217e-01, 2.3228e-01, -9.5294e-01, 9.7835e-01, -1.7293e-01, 2.3846e-01, 4.8146e-01, 6.5912e-03, -8.8724e-01, -3.5229e-01, 8.4911e-01, 
-3.5286e-02, -8.8944e-01, -5.5141e-02, -4.7656e-01, -4.7363e-01, -3.5688e-02, 6.3608e-01, -3.2397e-01, -4.2425e-01, -5.4916e-02, 9.3040e-01, 9.7627e-01, 7.4838e-01, -5.1590e-01, 4.6674e-01, -9.0206e-01, -5.0592e-01, 1.5316e-01, 2.7624e-01, 1.7898e-01, 9.9323e-01, -1.4045e-01, -1.6275e-01, -9.1684e-01, -9.8267e-01, 3.2413e-02, -8.8971e-01, -3.2410e-02, -7.1453e-01, 4.0365e-01, 5.0860e-01, -2.6739e-01, 3.7175e-01, -9.8981e-01, -8.5210e-01, 3.3096e-01, -3.1729e-01, 4.9861e-01, -2.0997e-01, 5.6376e-01, 1.7651e-01, -6.6355e-01, 7.7454e-01, 9.3114e-01, 2.3015e-01, -7.5848e-01, 8.5644e-01, -2.3493e-01, 9.0546e-01, -6.1747e-01, 9.8845e-01, 2.5930e-01, 3.8508e-01, -9.3526e-01, 1.6509e-01, -9.2224e-01, 1.8666e-01, -1.8823e-01, -6.0511e-01, -1.4290e-01, 4.5802e-01, 2.9694e-01, 7.0364e-01, -5.6475e-01, 9.9713e-01, -4.6605e-01, -9.5852e-01, 3.6494e-01, -6.1851e-02, -9.8850e-01, 1.2088e-01, 1.8488e-01, -4.5003e-01, -4.3713e-01, -4.3971e-01, -9.6328e-01, 9.0248e-01, 1.4709e-01, 9.9092e-01, 5.7188e-02, -9.3378e-01, -3.1652e-01, -9.2534e-01, -8.0443e-02, -2.1560e-01, 6.4397e-01, -9.1586e-02, -9.4833e-01, 4.7442e-01, 5.7476e-01, 3.3297e-01, 3.8941e-01, 9.9658e-01, 9.9985e-01, 9.7776e-01, 8.7411e-01, 8.7804e-01, -9.6168e-01, -1.2054e-01, 9.9997e-01, -6.6824e-01, -1.0000e+00, -9.5125e-01, -5.6642e-01, 4.1273e-01, -1.0000e+00, -1.6136e-01, -3.4676e-02, -9.1901e-01, -3.1622e-01, 9.8318e-01, 9.9124e-01, -1.0000e+00, 8.9389e-01, 9.4346e-01, -5.0858e-01, 2.4580e-01, -2.3135e-01, 9.7547e-01, 4.2250e-01, 3.7753e-01, -2.2546e-01, 3.7723e-01, -1.3091e-01, -8.7157e-01, 2.3319e-01, 2.5093e-01, 8.2724e-01, 1.5588e-01, -7.3930e-01, -9.3200e-01, -1.2279e-01, -6.6587e-02, -2.5732e-01, -9.6035e-01, -1.6951e-01, -3.5703e-01, 6.1311e-01, 2.4599e-01, 2.3456e-01, -7.9384e-01, 2.7844e-01, -5.1939e-01, 4.3604e-01, 5.1201e-01, -9.2245e-01, -6.2274e-01, -2.1160e-01, -4.7518e-01, 2.7232e-01, -9.6657e-01, 9.7142e-01, -2.9870e-01, 2.4310e-01, 1.0000e+00, -9.2202e-02, -8.8537e-01, 3.1929e-01, 1.6034e-01, -3.4469e-01, 1.0000e+00, 3.9171e-01, -9.8495e-01, -3.9130e-01, 2.0869e-01, -3.9736e-01, -4.2046e-01, 9.9881e-01, -2.3887e-01, 2.8045e-01, 2.6567e-01, 9.7683e-01, -9.9247e-01, 6.3824e-01, -9.0147e-01, -9.5820e-01, 9.5663e-01, 9.3855e-01, -1.4730e-01, -7.2889e-01, 1.4520e-01, -1.8675e-01, 2.6300e-01, -9.6400e-01, 5.8518e-01, 4.4442e-01, -9.6464e-02, 8.8574e-01, -8.8098e-01, -3.9014e-01, 4.1658e-01, 9.9770e-02, 4.1451e-01, 2.6072e-01, 4.5863e-01, -3.4371e-01, 1.0964e-01, -2.7387e-01, -8.9248e-02, -9.6777e-01, 4.3397e-02, 1.0000e+00, 1.2981e-01, -3.4366e-01, -4.5056e-02, -9.4596e-02, -2.1016e-01, 3.5447e-01, 5.0661e-01, -3.0578e-01, -8.1335e-01, 6.9142e-02, -9.1946e-01, -9.8745e-01, 7.4571e-01, 1.8653e-01, -3.5182e-01, 9.9974e-01, 2.4423e-01, 1.8763e-01, -7.2386e-02, 3.4985e-01, 1.0746e-01, 5.1677e-01, -4.9051e-01, 9.7835e-01, -3.0722e-01, 3.8846e-01, 8.6099e-01, 1.8453e-01, -3.9804e-01, -6.3625e-01, 9.1733e-03, -9.4351e-01, -5.8535e-02, -9.6325e-01, 9.6869e-01, -1.2770e-01, 3.2308e-01, 2.0592e-01, 1.4773e-01, 1.0000e+00, 2.8664e-01, 6.8401e-01, -6.8457e-01, 8.7746e-01, -9.4684e-01, -7.9937e-01, -3.8151e-01, -4.8727e-02, 4.4213e-01, -2.3993e-01, 2.1252e-01, -9.7509e-01, -1.9764e-01, -6.7608e-02, -9.7805e-01, -9.8934e-01, 4.5225e-01, 7.6899e-01, 1.1139e-01, -6.8287e-01, -5.6328e-01, -5.9391e-01, 2.5473e-01, -2.1508e-01, -9.2927e-01, 6.3278e-01, -3.2913e-01, 4.2842e-01, -3.1567e-01, 4.6466e-01, -2.1445e-01, 7.9070e-01, 1.9876e-01, 1.7233e-01, -1.2041e-01, -8.2787e-01, 7.1979e-01, -8.0239e-01, 1.0820e-01, -1.7385e-01, 1.0000e+00, -4.9901e-01, 
-3.6784e-02, 7.7607e-01, 7.4679e-01, -1.9120e-01, 1.9722e-01, 1.9967e-01, 2.1493e-01, 3.5653e-01, 2.5057e-01, -8.0337e-01, -2.9930e-01, 5.6660e-01, -4.0009e-01, -2.1291e-01, 8.1289e-01, 2.1814e-01, 8.7318e-02, -9.9111e-02, 1.8116e-01, 9.9893e-01, -1.7561e-01, -1.1083e-01, -5.9985e-01, -1.1718e-01, -3.0548e-01, -5.8867e-01, 1.0000e+00, 3.5209e-01, -1.9990e-01, -9.9180e-01, -1.8034e-02, -9.2345e-01, 9.9952e-01, 8.0807e-01, -8.5855e-01, 5.7186e-01, 2.8361e-01, -1.6332e-01, 7.0452e-01, -2.8947e-01, -3.2616e-01, 1.7375e-01, 1.6440e-01, 9.5412e-01, -4.7137e-01, -9.6928e-01, -6.5504e-01, 3.6296e-01, -9.5361e-01, 9.7173e-01, -5.8073e-01, -1.9150e-01, -3.3605e-01, 3.5247e-01, 8.7008e-01, -2.1333e-03, -9.7685e-01, -1.8092e-01, 6.1657e-02, 9.7678e-01, 2.7418e-01, -4.2944e-01, -9.5711e-01, -1.8267e-01, 6.5512e-02, 3.0961e-01, -9.1480e-01, 9.7564e-01, -9.7270e-01, 2.6567e-01, 1.0000e+00, 3.8786e-01, -6.4924e-01, 1.9543e-01, -4.9142e-01, 2.3787e-01, 8.9131e-02, 5.6665e-01, -9.4914e-01, -2.9186e-01, -2.7113e-01, 2.6839e-01, -1.8699e-01, 3.7806e-01, 6.6034e-01, 2.5334e-01, -3.5623e-01, -5.3300e-01, -2.1946e-01, 3.8268e-01, 7.2743e-01, -2.6907e-01, -1.9909e-01, 1.3403e-01, -2.0919e-01, -9.0321e-01, -2.7320e-01, -3.6081e-01, -9.9466e-01, 6.8170e-01, -1.0000e+00, -1.3583e-01, -5.5586e-01, -2.3915e-01, 8.6088e-01, 3.7196e-02, 5.7585e-02, -7.7021e-01, 3.6318e-01, 8.1365e-01, 7.2954e-01, -2.8529e-01, 1.7030e-01, -7.6105e-01, 1.7249e-01, -1.9593e-01, 2.6639e-01, 1.1146e-01, 7.0965e-01, -1.9811e-01, 1.0000e+00, 1.0188e-01, -5.4220e-01, -9.7256e-01, 3.0447e-01, -2.6452e-01, 9.9995e-01, -9.3875e-01, -9.5903e-01, 2.8526e-01, -6.0464e-01, -8.2965e-01, 2.4211e-01, 1.2796e-01, -6.8806e-01, -3.7915e-01, 9.5529e-01, 8.6700e-01, -3.4978e-01, 2.9469e-01, -3.8873e-01, -4.3963e-01, 8.3753e-02, -2.6750e-01, 9.8786e-01, 2.4844e-01, 9.2651e-01, 6.6386e-01, -2.5228e-02, 9.6676e-01, 3.2368e-01, 6.5488e-01, 1.0813e-01, 1.0000e+00, 3.6666e-01, -9.5177e-01, 2.3646e-01, -9.8821e-01, -2.6993e-01, -9.5921e-01, 2.2923e-01, 1.7226e-01, 9.1098e-01, -2.9949e-01, 9.6250e-01, 2.4218e-01, 1.3680e-01, 1.6822e-01, 5.4578e-01, 3.2755e-01, -9.3052e-01, -9.8844e-01, -9.8757e-01, 3.3784e-01, -4.5782e-01, -6.9121e-02, 3.6121e-01, 1.9176e-01, 3.9072e-01, 3.6573e-01, -1.0000e+00, 9.3469e-01, 4.4213e-01, -1.4691e-01, 9.6524e-01, 1.3485e-02, 2.9751e-01, 2.4334e-01, -9.8886e-01, -9.6670e-01, -3.9043e-01, -3.6449e-01, 8.1793e-01, 6.7040e-01, 8.5270e-01, 3.2083e-01, -4.9832e-01, -2.7567e-01, 4.0584e-01, -2.4154e-01, -9.9213e-01, 4.2448e-01, 2.6829e-01, -9.7171e-01, 9.6098e-01, -4.4311e-01, -2.2641e-01, 6.6900e-01, 6.8819e-02, 9.4393e-01, 7.6540e-01, 5.9071e-01, 1.1600e-01, 6.0003e-01, 8.7642e-01, 9.5714e-01, 9.8856e-01, 1.3229e-01, 7.8398e-01, 3.2535e-01, 4.0681e-01, 3.5011e-01, -9.3994e-01, 2.2100e-01, 1.2674e-01, -1.6419e-01, 2.9378e-01, -2.3917e-01, -9.7171e-01, 3.4781e-01, -2.8501e-01, 5.4948e-01, -4.0438e-01, 7.0494e-02, -4.3903e-01, -2.3478e-01, -7.8532e-01, -5.0934e-01, 5.0192e-01, 4.1413e-01, 9.2632e-01, 3.0985e-01, -1.4323e-01, -6.4190e-01, -1.6080e-01, 3.1866e-01, -9.2836e-01, 9.2523e-01, -1.3718e-02, 4.0830e-01, -1.7649e-02, -5.0922e-04, 5.2501e-01, -3.0525e-01, -3.6783e-01, -2.6349e-01, -8.0582e-01, 8.1521e-01, -7.9358e-02, -4.9387e-01, -4.9402e-01, 5.9927e-01, 3.1836e-01, 9.9176e-01, 2.0806e-01, 2.6599e-01, -1.2025e-01, -2.1694e-01, 3.5750e-01, -2.4472e-01, -1.0000e+00, 4.2622e-01, 2.0912e-01, -8.1241e-02, 7.5354e-02, -2.0835e-01, 1.8585e-01, -9.7545e-01, -1.5719e-01, 1.4028e-01, -1.8040e-01, -5.1406e-01, -3.6387e-01, 3.6267e-01, 5.9451e-01, 
3.2176e-01, 9.0730e-01, -4.6973e-02, 5.9712e-01, 4.5915e-01, -5.8261e-02, -6.2097e-01, 9.1518e-01]], grad_fn=<TanhBackward>)) >>> ```<|||||>It seems that it was a problem with pytorch. Upgrading to 1.3 solve the problem. Thanks @TheEdoardo93
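For later readers, a minimal sketch of the dtype clash behind the original traceback: the causal mask built by `modeling_bert.py` is boolean, the attention mask is float, and older PyTorch (apparently pre-1.3) refuses to multiply the two. An explicit cast works on any version:

```python
import torch

causal_mask = torch.tril(torch.ones(1, 4, 4)).bool()  # boolean lower-triangular mask
attention_mask = torch.ones(1, 1, 1, 4)                # float padding mask

# bool * float raises "expected ... Float but got ... Bool" on older torch;
# torch >= 1.3 promotes the types automatically, and casting first always works.
extended_mask = causal_mask[:, None, :, :].to(attention_mask.dtype) * attention_mask
```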
transformers
2,125
closed
DistilmBERT training/distillation dataset
## ❓ Questions & Help Thanks a lot for Distil**m**BERT (amongst everything else), is there any info on the dataset used in the distillation process? Both the dataset itself or the process used to obtain it would be greatly appreciated! Am I right to assume you used a similar (if not the same) data as the original [`multilingual-bert`](https://github.com/google-research/bert/blob/master/multilingual.md#details) with a processed dump of the biggest 104 wikipedia dumps? Again, any pointer to the preprocessing steps would be great!
12-10-2019 15:39:55
12-10-2019 15:39:55
By reading the [official docs](https://github.com/huggingface/transformers/tree/master/examples/distillation), I think that they have trained Distil**m**BERT . For what concern the pre-processing steps, there are no information about that (surely I'm interested in these steps too). It would be more useful and precise to specify: - the dataset used for training the model --> on the **concatenation of Wikipedia** in 104 different languages. Is it correct guys? - the pre-processing steps developed --> **no information** about this step - when to use Bert('base-multilingual-cased') and when to use DistilMBert --> the latter one is twice as fast as the former one, as said in the official docs - the difference between Bert('base-multilingual-cased') and DistilMBert --> on a [Twitter account](https://twitter.com/BramVanroy/status/1203096204122435590), a HuggingFace's dev said the following statement: "_Distil-mBERT is just an instance of DistilBERT with multilingual weights_." A question related to this topic: "_pretrained with the supervision of bert-base-multilingual-cased_" means that they have initialized the weights of the DistilMBERT model with the ones of multi-lingual BERT model? > ## Questions & Help > Thanks a lot for Distil**m**BERT (amongst everything else), is there any info on the dataset used in the distillation process? > > Both the dataset itself or the process used to obtain it would be greatly appreciated! > > Am I right to assume you used a similar (if not the same) data as the original [`multilingual-bert`](https://github.com/google-research/bert/blob/master/multilingual.md#details) with a processed dump of the biggest 104 wikipedia dumps? > Again, any pointer to the preprocessing steps would be great!<|||||>Thanks for adding a bit of context, but I'm pretty sure that (given that Distil**m**BERT is just a DistilBERT with multilingual weights) the distillation procedure is pretty much the same used in the [paper](https://arxiv.org/abs/1910.01108), just using `bert-base-multilingual-cased` as teacher. I was really just curious to know if they had used the 104 languages wiki dump for distillation as well and if either the data or the script used to obtain them are available somewhere :)<|||||>Yeah, you're right about the distillation procedure they've followed.<|||||>Hello @mbant Indeed we used the concatenation of 104 wikipedia. We only extract ~110M seqs among these dumps following the smoothing probability of 0.7 used in mBERT (see [here](https://github.com/google-research/bert/blob/master/multilingual.md#details)). The pre-training distillation phase is then the same as described in the paper! The teacher is indeed `bert-base-multilingual-cased`. Victor<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
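For concreteness, a small sketch of the exponentially smoothed sampling Victor refers to (the same scheme as the mBERT README, smoothing factor 0.7); the per-language corpus sizes below are made up:

```python
# Hypothetical per-language corpus sizes (number of sequences), for illustration only.
corpus_sizes = {"en": 120_000_000, "de": 30_000_000, "fi": 3_000_000}

total = sum(corpus_sizes.values())
raw_probs = {lang: size / total for lang, size in corpus_sizes.items()}

# Exponentiate by 0.7 and renormalise: low-resource languages are sampled more
# often than their raw share, high-resource ones less often.
smoothed = {lang: p ** 0.7 for lang, p in raw_probs.items()}
norm = sum(smoothed.values())
sampling_probs = {lang: p / norm for lang, p in smoothed.items()}
print(sampling_probs)
```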
transformers
2,124
closed
Is there a way to evaluate models during training in Multi-gpu setting
## ❓ Questions & Help Hi all, I always see comments in examples saying that "when single GPU otherwise metrics may not average well". So is this really something that shouldn't be done? I mean, is there a way to evaluate the model safely after each epoch in the multi-gpu training setting? Thanks. <!-- A clear and concise description of the question. -->
12-10-2019 14:21:25
12-10-2019 14:21:25
Yes. After each batch has been completed by a GPU, you can store its results and corresponding labels in a shared space (e.g. CPU/memory). Then, when all batches are done, you can evaluate the epoch by calculating your metric/avg loss over all gathered results. It has been suggested to only keep track of the batch averages, but I feel like that is just an approximation of an approximation. My code is a bit too bombastic to share, but I use something like this: - custom Trainer class with a train/evaluate/test loop. It also has a 'performer' property - the performer is an instance of a custom Performer singleton class that keeps track of losses, labels, and/or predictions at the end of each processed batch. Note that this means that each separate process keeps track of its own progress. Results aren't shared between processes until the end of the epoch - at the end of each epoch, all results are `gather`ed from the different processes to a single one, which then calculates the average over all collected batches, and broadcasts that information (e.g. avg_loss, avg_secondary_metric) back to the other processes It is rather complex and perhaps too much code and hassle for what it is, but for me it was more the learning experience of how to work with multi-GPU and `gather` with, as a bonus, fast evaluation and testing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hey @BramVanroy thank you for your reply and sorry for late reply. Unfortunately I am not sure if I understood what you suggested. Do you suggest to transfer predictions and gold labels of each bacth to the CPU and then calculate metrics by using them if I want to evaluate my model during training ? As far as I see in most of the examples in the repo, it is okay to evaluate the model by using multi gpus once the training is over. They only do not suggest to eval it during training. There must be some reason for that ? <|||||>I did not even know how to "evaluate models during training in single-gpu setting".
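One hedged way to implement the "combine per-process results at the end of the epoch" idea above is an all_reduce of loss sums and example counts, assuming `torch.distributed` is already initialised:

```python
import torch
import torch.distributed as dist

def gather_epoch_loss(local_loss_sum: float, local_count: int, device) -> float:
    """Return the exact average eval loss across all processes."""
    stats = torch.tensor([local_loss_sum, float(local_count)], device=device)
    # Sum losses and counts over every rank, then divide: each process ends up
    # with the true average instead of an average of per-GPU batch averages.
    dist.all_reduce(stats, op=dist.ReduceOp.SUM)
    return (stats[0] / stats[1]).item()
```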
transformers
2,123
closed
Transformers for Tabular data extraction - e.g., wikitables
Hi Team, Can you please let us know if Transformers can be used to extract information from tabular data. Example is - [https://demo.allennlp.org/wikitables-parser](https://demo.allennlp.org/wikitables-parser) . WikiTables is the dataset. Example questions can be: show me all students who got marks greater than 40% Wondering if BERT or any other SOTA transformer technology can be leveraged to solve this NLP problem
12-10-2019 14:02:06
12-10-2019 14:02:06
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
2,122
closed
Remove misplaced summarization documentation
Documentation for the previous version of abstractive summarization is still present in the repository: https://twitter.com/DavidMezzetti/status/1204123548966621184 This PR removes it.
12-10-2019 13:53:28
12-10-2019 13:53:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=h1) Report > Merging [#2122](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e57d00ee108595375504eb21c230ce35428aae5e?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2122/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2122 +/- ## ======================================= Coverage 80.08% 80.08% ======================================= Files 112 112 Lines 16862 16862 ======================================= Hits 13504 13504 Misses 3358 3358 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=footer). Last update [e57d00e...a5fa0de](https://codecov.io/gh/huggingface/transformers/pull/2122?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
2,121
closed
"Write With Transformer" interface returning 502 on gpt2/xl model
## 🐛 Bug The "Write With Transformer" interface is returning a `502` when the API calls the gpt2/xl model. See: https://transformer.huggingface.co/doc/gpt2-xl ## To Reproduce Steps to reproduce the behavior just using the API request: ``` curl 'https://transformer.huggingface.co/autocomplete/gpt2/xl' --data-binary '{"context":"See how a modern ","model_size":"gpt2/xl","top_p":0.9,"temperature":1,"max_time":1}' --compressed ``` ## Expected behavior Expecting autocomplete results, but getting this `502` response instead. ``` <html> <head><title>502 Bad Gateway</title></head> <body bgcolor="white"> <center><h1>502 Bad Gateway</h1></center> <hr><center>nginx/1.14.2</center> </body> ```
12-10-2019 09:15:28
12-10-2019 09:15:28
transformers
2,120
closed
BertModel.from_pretrained() doesn't accept pathlib.PosixPath anymore
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): Japanese The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Prepare a directory with `config.json`, `pytorch_model.bin`. 2. Give the directory as a pathlib.PosixPath such like `bert_model = BertModel.from_pretrained(bert_path)` 3. <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> TypeError: argument of type 'PosixPath' is not iterable ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> This used to load the model with 2.1.1, but started to cause an error since 2.2.0. `bert_model = BertModel.from_pretrained(str(bert_path))` works. ## Environment colab * OS: * Python version: * PyTorch version: * PyTorch Transformers version (or branch): 2.2.0 and later * Using GPU ? * Distributed of parallel setup ? * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
12-10-2019 07:39:41
12-10-2019 07:39:41
It comes from this line https://github.com/huggingface/transformers/blob/master/transformers/modeling_utils.py#L321-L324 If it's a PosixPath, it's not an iterable so `"albert" in path` crashes. Patch already pushed in previous PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
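A tiny sketch of the 2.2.x workaround and of the kind of guard the patch adds (the model directory below is hypothetical):

```python
from pathlib import Path
from transformers import BertModel

bert_path = Path("/path/to/local/japanese-bert")   # hypothetical local model directory
model = BertModel.from_pretrained(str(bert_path))  # casting to str avoids the failing "in" check

# The library-side fix boils down to normalising the argument before the substring test:
# if isinstance(pretrained_model_name_or_path, Path):
#     pretrained_model_name_or_path = str(pretrained_model_name_or_path)
```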
transformers
2,119
closed
Finetune and generate text with BertForMaskedLM
## ❓ Questions & Help I am trying to fine-tune and generate text using BertForMaskedLM. Although my script works I am not getting the output I am expecting. I am confused on what should I pass to BertForMaskedLM when training (attention mask, token types ids, etc) and how to generate text once the model is fine tuned. Any help is welcome, hereafter is my current code: ``` import torch from torch.optim import Adam from transformers import BertForMaskedLM, AutoTokenizer if __name__ == "__main__": lr = 0.002 epochs = 20 model = BertForMaskedLM.from_pretrained("bert-base-uncased") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") optimizer = Adam(model.parameters(), lr=lr) dataset = ["this is the first sentence.", "this is the second, slightly longer, sentence.", "this is the third and last sentence."] # We precomputed this max_len = 12 # Since we only have 3 sentences we fit all our dataset in a single batch. # The padded batch will look like: # [[101, 2023, 2003, 1996, 2034, 6251, 1012, 102, 0, 0, 0, 0], # [101, 2023, 2003, 1996, 2117, 1010, 3621, 2936, 1010, 6251, 1012, 102], # [101, 2023, 2003, 1996, 2353, 1998, 2197, 6251, 1012, 102, 0, 0]] padded_batch = [] padding_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token) for sentence in dataset: encoded_sentence = tokenizer.encode(sentence) padded_sentence = encoded_sentence + \ [padding_id]*(max_len-len(encoded_sentence)) padded_batch.append(padded_sentence) # The attention mask will have the same shape of the batch, with 0s for the # padded element and 1s for the non-padded ones # [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0], # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] attention_mask = [[1 if t_id != padding_id else 0 for t_id in tensor] for tensor in padded_batch] # The lm_labels will be the same as the padded batch but for the padded # elements which are replaced by -1 # [[101, 2023, 2003, 1996, 2034, 6251, 1012, 102, -1, -1, -1, -1], # [101, 2023, 2003, 1996, 2117, 1010, 3621, 2936, 1010, 6251, 1012, 102], # [101, 2023, 2003, 1996, 2353, 1998, 2197, 6251, 1012, 102, -1, -1]] lm_labels = [[t_id if t_id != padding_id else -1 for t_id in tensor] for tensor in padded_batch] # Converting the model input from list to tensor padded_batch = torch.tensor(padded_batch) attention_mask = torch.tensor(attention_mask) lm_labels = torch.tensor(lm_labels) # Since we only have one batch every epoch we do a single forward pass, # backprop and optimization step for i in range(epochs): loss, _ = model(input_ids=padded_batch, attention_mask=attention_mask, lm_labels=lm_labels) print(loss.item()) loss.backward() optimizer.step() model.zero_grad() # The model should now be trained and we want to generate the first three # words of a new sentence. Given the training data used we expect it to be # "this is the". # Initialize the model input with "[CLS] [MASK]" to generate word w_1, the # input for w_i is "[CLS] w_1 ... w_(i-1) [MASK]" where w_1 ... w_(i-1) # have been generated during previous steps. output = [101, 103] for i in range(4): generation_input = torch.tensor([output]) pred = model(generation_input)[0] new_index = torch.argmax(pred, -1) output[-1] = new_index[:, -1].item() output.append(103) print(output) print(tokenizer.decode(output)) ```
12-10-2019 04:49:10
12-10-2019 04:49:10
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>did you find a solution?<|||||>Hi, I am also encountering this problem. Is it possible to please provide an example for fine tuning BertForMaskedLM on a specific type of text (ex Medical corpus) ? Especially what input should be passed to BertForMaskedLM to fine tune it (attention mask, token types ids, masked_token_index)?
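Since the question keeps coming up, here is a hedged sketch of the masking step `BertForMaskedLM` expects for fine-tuning (argument names follow the 2.x API used above; newer releases renamed `masked_lm_labels` to `labels`): mask a fraction of tokens, predict only those, and ignore the rest.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

input_ids = torch.tensor([tokenizer.encode("this is the first sentence.", add_special_tokens=True)])
labels = input_ids.clone()

# Pick ~15% of the non-special tokens to mask; only those positions contribute to the loss.
masked = torch.bernoulli(torch.full(labels.shape, 0.15)).bool()
masked[:, 0] = False    # keep [CLS]
masked[:, -1] = False   # keep [SEP]

labels[~masked] = -100  # ignored by the loss (older releases used -1 here)
input_ids[masked] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

loss, prediction_scores = model(input_ids, masked_lm_labels=labels)[:2]
loss.backward()
```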
transformers
2,118
closed
Could convert_pytorch_checkpoint_to_tf2.py convert any pytorch model to tf2?
## ❓ Questions & Help <!-- A clear and concise description of the question. -->
12-10-2019 04:08:39
12-10-2019 04:08:39
I think **no**. You can use this Python script to convert a PyTorch implementation of one of the models supported by Transformers to its TensorFlow 2.0 version. Have you ever tried to use this Python script to convert **any** PyTorch model to TensorFlow 2.0? > ## Questions & Help<|||||>@TheEdoardo93 No, I have not tried. I read the code, and it looks like only the model architectures the script explicitly supports can be converted, so a custom model may need its own conversion code. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,117
closed
Encoder-decoders in Transformers
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Language I am using the model on (English, Chinese....): The problem arise when using: * [X ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. I'm trying to use the hybrid seq2seq model described in this [article](https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8). It is stated that the library is available from 2.2.0 version. I tried in both 2.2.0 and 2.2.1 I don't find the respective libraries working as expected. ``` from transformers import PreTrainedEncoderDecoder model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased','gpt2') tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') encoder_input_ids=tokenizer.encode("Hi How are you") ouput = model(torch.tensor( encoder_input_ids).unsqueeze(0),torch.tensor( encoder_input_ids).unsqueeze(0) ) ``` and I get the following error: ``` TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states' ``` I checked the code of modelling [gpt2](https://github.com/huggingface/transformers/blob/1d189304624db17749aee23fa2345f009cc48215/transformers/modeling_gpt2.py#L541) it doesn't take any input as encoder_hidden_states. 2. I also tried the another example from that article ![1_my6KF5Wa2AFa54tPnQp3Yw](https://user-images.githubusercontent.com/12907396/70492861-013de580-1ac3-11ea-9197-86ebae611f7e.png) There is no decode method in Model2Model, but then do you mean decoder? But then Bert using decoder I get the following error ``` TypeError: forward() got an unexpected keyword argument 'length' ``` As bert doesn't take the length as the input or any of the parameters shown in the example. * OS: Windows * Python version: 3.7.4 * PyTorch version:1.3.0 * PyTorch Transformers version (or branch):2.2.0 * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
12-10-2019 03:38:26
12-10-2019 03:38:26
Hi @anandhperumal, Thank you for posting an issue. Just to clarify: 1. Indeed, as I specified in the article, `PreTrainedEncoderDecoder` only works with BERT as an encoder and BERT as a decoder. GPT2 shouldn't take too much work to adapt, but we haven't had the time to do it yet. Trying `PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased', 'bert-base-uncased')` should work. Let me know if it doesn't. 2. We mean `decode`. Again, as written in the article this is not available as of now but will be very soon. You can follow the progress here: #1840 <|||||>Hi @rlouf , Thanks for getting back. The `PreTrainedEncoderDecoder` works like a charm, but what was your intuition behind using BERT as a decoder? There is nothing wrong with using it as a decoder, but it was never trained as a decoder. Did you test that on any dataset by combining two BERTs trained for different tasks?<|||||>It was this paper that we originally intended to reproduce: https://arxiv.org/abs/1907.12461<|||||>@rlouf Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>How can I fine-tune the encoder-decoder model on a new corpus?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@rlouf I am using transformers (3.0.2). The EncoderDecoderModel module has the same problem as its predecessor. I am getting the following error when using BERT+GPT2 as well as BERT+XLNet as encoder-decoder: ``` forward() got an unexpected keyword argument 'encoder_hidden_states' ``` Has the problem been fixed? If yes, please tell me how to use it.
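For later readers, a hedged sketch of the BERT-to-BERT setup that the older `PreTrainedEncoderDecoder` API discussed here does support (argument names changed in later releases such as 3.x's `EncoderDecoderModel`):

```python
import torch
from transformers import BertTokenizer, PreTrainedEncoderDecoder

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BERT as encoder plus BERT as decoder is the only combination supported at this point.
model = PreTrainedEncoderDecoder.from_pretrained("bert-base-uncased", "bert-base-uncased")

source = torch.tensor([tokenizer.encode("Hi, how are you?", add_special_tokens=True)])
target = torch.tensor([tokenizer.encode("I am fine, thanks.", add_special_tokens=True)])

outputs = model(source, target)
# For fine-tuning on a new corpus you would also pass decoder labels
# (decoder_lm_labels in this version, if memory serves) and backprop the returned loss.
```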
transformers
2,116
closed
Couldn't reach server at '{}' to download vocabulary files.
Traceback (most recent call last): File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connection.py", line 157, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen chunked=chunked, File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 376, in _make_request self._validate_conn(conn) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn conn.connect() File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connection.py", line 334, in connect conn = self._new_conn() File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connection.py", line 169, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7f912c2cc550>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/urllib3/util/retry.py", line 436, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-chinese-vocab.txt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f912c2cc550>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 360, in _from_pretrained resolved_vocab_files[file_id] = cached_path(file_path, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/file_utils.py", line 180, in cached_path resume_download=resume_download) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/file_utils.py", line 327, in get_from_cache http_get(url, temp_file, proxies=proxies, resume_size=resume_size) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/file_utils.py", line 243, in http_get response = requests.get(url, stream=True, proxies=proxies, headers=headers) File 
"/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/api.py", line 75, in get return request('get', url, params=params, **kwargs) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/api.py", line 60, in request return session.request(method=method, url=url, **kwargs) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-chinese-vocab.txt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f912c2cc550>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "bert.py", line 125, in <module> text_train, text_dev, text_test, label_train, label_dev, label_test = load_dataset('qa_dataset/Beauty_domain.txt', max_len = 60) File "bert.py", line 96, in load_dataset text_list = prepare_data(text_list) File "bert.py", line 54, in prepare_data tokenizer = BertTokenizer.from_pretrained("bert-base-chinese") File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 286, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/home/hgy/anaconda3/envs/pytorch-python3/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 372, in _from_pretrained raise EnvironmentError(msg) OSError: Couldn't reach server at '{}' to download vocabulary files.
12-10-2019 02:52:54
12-10-2019 02:52:54
Did you find out what the problem was @venusafroid ?<|||||>> Did you find out what the problem was @venusafroid ? I think the OP had a problem connecting to S3, as shown in the log ``` requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-base-chinese-vocab.txt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f912c2cc550>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` @paulmwatson Not sure if it is the same issue for you. The cases that raise `EnvironmentError` vary: a file that does not exist, an archive format that is not recognized, and many others raised by the OS. I got a cache-directory access-denied error, for example. You may refer to the earlier part of your log. However, I think it is a bug to have `"{}"` in the log string. Maybe they forgot to pass a more informative argument to the format string.
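Two small, hedged sketches related to the last comment: the kind of one-line fix for the empty '{}' in the error message, and an offline fallback that avoids the S3 connection entirely (the local path is hypothetical):

```python
# 1) The misleading message comes from a format string that never receives its
#    argument; something along these lines would make the log name the files:
# msg = "Couldn't reach server at '{}' to download vocabulary files.".format(vocab_url)

# 2) If the machine cannot reach s3.amazonaws.com, download
#    bert-base-chinese-vocab.txt on another machine, copy it over as vocab.txt,
#    and point the tokenizer at the local directory:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("/data/bert-base-chinese/")  # hypothetical dir containing vocab.txt
```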
transformers
2,115
closed
[WIP] Add MMBT Model to Transformers Repo
Implements the MMBT Model from Supervised Multimodal Bitransformers for Classifying Images and Text by Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Davide Testuggine (https://arxiv.org/abs/1909.02950) (https://github.com/facebookresearch/mmbt/) Adds run_mmimdb.py to show example training run on MM-IMDb dataset (http://lisi1.unal.edu.co/mmimdb/)
12-10-2019 02:41:18
12-10-2019 02:41:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=h1) Report > Merging [#2115](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1d189304624db17749aee23fa2345f009cc48215?src=pr&el=desc) will **decrease** coverage by `0.53%`. > The diff coverage is `21.32%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2115/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2115 +/- ## ========================================= - Coverage 83.24% 82.7% -0.54% ========================================= Files 110 112 +2 Lines 16053 16189 +136 ========================================= + Hits 13363 13389 +26 - Misses 2690 2800 +110 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2115/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX21tYnQucHk=) | `18.25% <18.25%> (ø)` | | | [transformers/configuration\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2115/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fbW1idC5weQ==) | `60% <60%> (ø)` | | | [transformers/tests/modeling\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2115/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.98% <0%> (-0.56%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=footer). Last update [1d18930...df39611](https://codecov.io/gh/huggingface/transformers/pull/2115?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi! This is great, thank you for adding this. There's a few things I'll need to change before merging: - I'll complete the documentation so that it is visible on our huggingface.co/transformers and not only in the source code. - I'll add some tests - I'll move the scripts to a folder inside examples, e.g. `examples/mmbt/*`, as it was done with PPLM/Distillation/summarization. I'll push directly on your fork if that's okay!<|||||>That sounds great. Thank you! <|||||>I'm trying to run the `run_mmimdb.py` script, could you tell me where to download the dataset? The link you've provided downloads a .tar that contains a `split.json` as well as training/evaluation data, but no `dev.jsonl` or `train.jsonl` as specified in the `load_examples` method.<|||||>Ok merging, for now, to have the code in the codebase cleanup. Let's not forget to add: - documentation - tests - pretrained model weights later so that people can really use the model.<|||||>where is the `dev.jsonl` or `train.jsonl`?
transformers
2,114
closed
Split models to multiple GPUs
I would like to fine-tune GPT2-large, which simply does not fit into GPU memory. I wanted to run the script `run_lm_finetuning.py` with GPT2-large on two Nvidia Tesla P100s, but I suppose model splitting is not supported. Or am I wrong?
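At the time of writing the library has no built-in model parallelism, so any split has to be done by hand. A generic sketch of the idea (not wired into `run_lm_finetuning.py`; GPT-2's own forward would need to be adapted to route activations between devices):

```python
import torch.nn as nn

class TwoDeviceWrapper(nn.Module):
    """Naive model parallelism: first half of the blocks on cuda:0, second half on cuda:1."""

    def __init__(self, blocks):
        super().__init__()
        half = len(blocks) // 2
        self.first = nn.Sequential(*blocks[:half]).to("cuda:0")
        self.second = nn.Sequential(*blocks[half:]).to("cuda:1")

    def forward(self, hidden):
        hidden = self.first(hidden.to("cuda:0"))
        # Only the activations cross devices; each GPU holds half of the weights.
        return self.second(hidden.to("cuda:1"))
```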
12-09-2019 19:28:37
12-09-2019 19:28:37
Indeed, as of now we don't support model splitting across different GPUs. However, I believe Tesla P100s have 16gb (or 12?) of VRAM and GPT-2 XL fits in ~7-8gb of VRAM. Do you get an OOM error when loading GPT-2 large in memory?<|||||>Thanks @LysandreJik. I trained gpt2-medium and it took almost the whole ram ~15gb. When I tried the same with gpt2-large the script was interrupted with "Killed" message twice and I didn't try further.<|||||>@LysandreJik XL needs around 7gb to do an inference but for finetuning it needs more. @dkajtoch did you try reducing your batch size?<|||||>@anandhperumal I have batch size set to 1 and gradient accumulation steps set to 32. I am running on Google Cloud's dedicated virtual machine for deep learning with pytorch 1.2 and cuda 10.0. I can investigate it further if you direct me. I am finetuning gpt2-medium right now and here is a screenshot from nvidia-smi ![image](https://user-images.githubusercontent.com/32985207/70520809-671b8300-1b3e-11ea-86fb-090ecb631ce1.png) <|||||>@dkajtoch for time being keep the gradient accumulation to 1 and let me know if it is able to run for 1 batch?<|||||>@anandhperumal here is what I get when trying to run gpt2-large on Google Colab with Nvidia P100: ``` 12/10/2019 21:26:39 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False 12/10/2019 21:26:39 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json not found in cache or force_download set to True, downloading to /tmp/tmprqss7xx9 100% 529/529 [00:00<00:00, 394731.69B/s] 12/10/2019 21:26:39 - INFO - transformers.file_utils - copying /tmp/tmprqss7xx9 to cache at /root/.cache/torch/transformers/c8f887cdfff4327916f4b7ed06a379c0add42bd9c66e1fe3b4a5a8525a4b2678.bc44facd742477605da5434f20a32607ead98e78fff95c5ca9523e47b453e1ad 12/10/2019 21:26:39 - INFO - transformers.file_utils - creating metadata file for /root/.cache/torch/transformers/c8f887cdfff4327916f4b7ed06a379c0add42bd9c66e1fe3b4a5a8525a4b2678.bc44facd742477605da5434f20a32607ead98e78fff95c5ca9523e47b453e1ad 12/10/2019 21:26:39 - INFO - transformers.file_utils - removing temp file /tmp/tmprqss7xx9 12/10/2019 21:26:39 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json from cache at /root/.cache/torch/transformers/c8f887cdfff4327916f4b7ed06a379c0add42bd9c66e1fe3b4a5a8525a4b2678.bc44facd742477605da5434f20a32607ead98e78fff95c5ca9523e47b453e1ad 12/10/2019 21:26:39 - INFO - transformers.configuration_utils - Model config { "attn_pdrop": 0.1, "embd_pdrop": 0.1, "finetuning_task": null, "initializer_range": 0.02, "is_decoder": false, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 1280, "n_head": 20, "n_layer": 36, "n_positions": 1024, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 50257 } 12/10/2019 21:26:39 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-vocab.json not found in cache or force_download set to True, downloading to /tmp/tmphav3yghk 100% 1042301/1042301 [00:00<00:00, 6030201.52B/s] 12/10/2019 21:26:40 - INFO - transformers.file_utils - copying /tmp/tmphav3yghk to 
cache at /root/.cache/torch/transformers/69f8d734111f39eaa51a85907bfdc81a7ef42242d638ffab6f77df305402b2b2.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 12/10/2019 21:26:40 - INFO - transformers.file_utils - creating metadata file for /root/.cache/torch/transformers/69f8d734111f39eaa51a85907bfdc81a7ef42242d638ffab6f77df305402b2b2.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 12/10/2019 21:26:40 - INFO - transformers.file_utils - removing temp file /tmp/tmphav3yghk 12/10/2019 21:26:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-merges.txt not found in cache or force_download set to True, downloading to /tmp/tmpnslvtbfy 100% 456318/456318 [00:00<00:00, 3892131.92B/s] 12/10/2019 21:26:40 - INFO - transformers.file_utils - copying /tmp/tmpnslvtbfy to cache at /root/.cache/torch/transformers/38d28acc17953e356348dca948e152c653c0ccf5058a552eea30168e27f02046.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 12/10/2019 21:26:40 - INFO - transformers.file_utils - creating metadata file for /root/.cache/torch/transformers/38d28acc17953e356348dca948e152c653c0ccf5058a552eea30168e27f02046.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 12/10/2019 21:26:40 - INFO - transformers.file_utils - removing temp file /tmp/tmpnslvtbfy 12/10/2019 21:26:40 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-vocab.json from cache at /root/.cache/torch/transformers/69f8d734111f39eaa51a85907bfdc81a7ef42242d638ffab6f77df305402b2b2.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 12/10/2019 21:26:40 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-merges.txt from cache at /root/.cache/torch/transformers/38d28acc17953e356348dca948e152c653c0ccf5058a552eea30168e27f02046.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 12/10/2019 21:26:40 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-pytorch_model.bin not found in cache or force_download set to True, downloading to /tmp/tmppfw2_223 100% 3247202234/3247202234 [01:12<00:00, 44997623.14B/s] 12/10/2019 21:27:53 - INFO - transformers.file_utils - copying /tmp/tmppfw2_223 to cache at /root/.cache/torch/transformers/bcc61dff8b1b03d0fd33a1eb1dc4db00875cae33296848155c6882d4bab03db4.999a50942f8e31ea6fa89ec2580cb38fa40e3db5aa46102d0406bcfa77d9142d 12/10/2019 21:28:05 - INFO - transformers.file_utils - creating metadata file for /root/.cache/torch/transformers/bcc61dff8b1b03d0fd33a1eb1dc4db00875cae33296848155c6882d4bab03db4.999a50942f8e31ea6fa89ec2580cb38fa40e3db5aa46102d0406bcfa77d9142d 12/10/2019 21:28:05 - INFO - transformers.file_utils - removing temp file /tmp/tmppfw2_223 12/10/2019 21:28:06 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-pytorch_model.bin from cache at /root/.cache/torch/transformers/bcc61dff8b1b03d0fd33a1eb1dc4db00875cae33296848155c6882d4bab03db4.999a50942f8e31ea6fa89ec2580cb38fa40e3db5aa46102d0406bcfa77d9142d 12/10/2019 21:28:44 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir='', config_name='', device=device(type='cuda'), do_eval=False, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file=None, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', 
gradient_accumulation_steps=1, learning_rate=6e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2-large', model_type='gpt2', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='finetuning', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=1, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='shakespeares.txt', warmup_steps=0, weight_decay=0.0) 12/10/2019 21:28:44 - INFO - __main__ - Creating features from dataset file at 12/10/2019 21:28:51 - INFO - __main__ - Saving features into cached file gpt2-large_cached_lm_1024_shakespeares.txt 12/10/2019 21:28:51 - INFO - __main__ - ***** Running training ***** 12/10/2019 21:28:51 - INFO - __main__ - Num examples = 1783 12/10/2019 21:28:51 - INFO - __main__ - Num Epochs = 1 12/10/2019 21:28:51 - INFO - __main__ - Instantaneous batch size per GPU = 1 12/10/2019 21:28:51 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1 12/10/2019 21:28:51 - INFO - __main__ - Gradient Accumulation steps = 1 12/10/2019 21:28:51 - INFO - __main__ - Total optimization steps = 1 Epoch: 0% 0/1 [00:00<?, ?it/s] Iteration: 0% 0/1783 [00:00<?, ?it/s]Traceback (most recent call last): File "/content/transformers/examples/run_lm_finetuning.py", line 594, in <module> main() File "/content/transformers/examples/run_lm_finetuning.py", line 546, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "/content/transformers/examples/run_lm_finetuning.py", line 261, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 549, in forward inputs_embeds=inputs_embeds) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 460, in forward head_mask=head_mask[i]) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 232, in forward head_mask=head_mask) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 193, in forward attn_outputs = self._attn(query, key, value, attention_mask, head_mask) File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 145, in _attn w = torch.matmul(q, k) RuntimeError: CUDA out of memory. 
Tried to allocate 80.00 MiB (GPU 0; 15.90 GiB total capacity; 15.16 GiB already allocated; 11.88 MiB free; 34.49 MiB cached) ``` Script is executed with the following flags: ``` !python /content/transformers/examples/run_lm_finetuning.py \ --train_data_file=shakespeares.txt \ --output_dir=finetuning \ --model_type=gpt2 \ --model_name_or_path=gpt2-large \ --do_train \ --per_gpu_train_batch_size=1 \ --gradient_accumulation_steps=1 \ --learning_rate=0.00006 \ --max_steps=1 ```<|||||>BTW from [gpt2-simple repo](https://github.com/minimaxir/gpt-2-simple) ![image](https://user-images.githubusercontent.com/32985207/70571258-c361bf80-1b9d-11ea-8be2-0d570ef1bbd3.png) <|||||>I am facing the same issue. I am able to fine-tune gpt2 and gpt2-medium but not the gpt2-large. I tried batch_size=1 and gradient_accumulation_steps=1 but still have the same issue.<|||||>@dkajtoch inference would never take too much of memory. Can you try loading the model into your GPU and tell us how much memory is being used? and did you try apex? <|||||>@anandhperumal I loaded the models with the following commands in Colab: ``` import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = GPT2LMHeadModel.from_pretrained('gpt2-large') model.to(torch.device("cuda")) !nvidia-smi ``` and `gpt2-medium` takes about 2GB whereas `gpt2-large` ~3.6GB. I haven't tried apex cause I do not know what that is. Just wanted to know if it is possible to train gpt2-large or higher on gpu, but it seems it is not. <|||||>Apex installed, flag `fp16` set and the same out of memory error<|||||>@dkajtoch I ran the following code on Colab it works perfectly fine. I would recommend you to write your own code rather than using huggingface code. ![image](https://user-images.githubusercontent.com/12907396/70671261-a7f7c280-1c38-11ea-8a64-2bbb0c464cc0.png) <|||||>Thanks @anandhperumal. That is a positive message. So it can work on gpu, but it does not with huggingface script. Maybe this needs further investigation and a fix could be pushed.<|||||>@dkajtoch you can still use the huggingface library but just don't use the run_lm_finetuning.py or debug it your self. It would be great to investigate this problem but it is very subtle. Anyways, I think you can train your model with your own script.<|||||>Right @anandhperumal !<|||||>I am dealing with long sentences and found that setting block_size overcame the out of memory issue. I had batch size = 1 and gradient accumulation = 1 and still got out of memory until on Tesla p100 (16GB) Until I used this to truncate the input sentences. Not sure how it will affects the quality of the results yet though.<|||||>if block_size is the problem for you then rather than truncating the over all input sequence you can change the code to handle batch wise max length that should help you.<|||||>@anandhperumal The code already handles the length per batch with args.block_size = min(args.block_size, tokenizer.max_len_single_sentence)<|||||>@PyxAI You tried for even batch size of 1 so what is your max sequence length ? what kind of dataset are you using.
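For reference, a rough way to see how much memory a single gpt2-large training step actually needs, and how it scales with sequence length, before launching the full script. This is an illustrative sketch only (random token ids, block size 256 instead of 1024); the numbers will vary with the setup:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2-large").to("cuda")
print("weights only: %.1f GiB" % (torch.cuda.memory_allocated() / 1024 ** 3))

# one forward/backward pass on a short block; activation memory grows with sequence length
input_ids = torch.randint(0, model.config.vocab_size, (1, 256), device="cuda")
loss = model(input_ids, labels=input_ids)[0]
loss.backward()
print("after one backward pass: %.1f GiB" % (torch.cuda.max_memory_allocated() / 1024 ** 3))
```

If the backward pass at block size 1024 gets close to the 16 GiB of a P100, that would explain why lowering `--block_size` (as reported above) or enabling fp16 makes the difference.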
transformers
2,113
closed
Running run_lm_finetuning.py within python
## Setup * Model: roberta-base * Language: english * OS: Ubuntu 18.04.3 * Python version: 3.7.3 * PyTorch version: 1.3.1+cpu * PyTorch Transformers version (or branch): 2.2.0 * Using GPU ? No * Distributed of parallel setup ? No * Script inputs: ``` python run_lm_finetuning.py \ --output_dir=$OUTPUT_DIR \ --model_type=roberta \ --model_name_or_path=roberta_base \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm \ --no_cuda ``` ## ❓ Questions & Help Is there a way to run the above within python? Said differently, if I want to call `run_lm_finetuning.py` from within one of my own python scripts using the above configurations, how would I best go about doing that? Thanks in advance!
12-09-2019 17:43:39
12-09-2019 17:43:39
You have two choices: - transform the code in `run_lm_finetuning.py` into Python functions and use them (I think it's **the most elegant solution**). In order to do that, you have to convert the `main` method with `argparse` arguments to a method without `argparse`, and after that you can use the script as given - call `run_lm_finetuning.py` from your Python script with e.g. [subprocess](https://docs.python.org/3.6/library/subprocess.html) > ## Setup > * Model: roberta-base > * Language: english > * OS: Ubuntu 18.04.3 > * Python version: 3.7.3 > * PyTorch version: 1.3.1+cpu > * PyTorch Transformers version (or branch): 2.2.0 > * Using GPU ? No > * Distributed of parallel setup ? No > * Script inputs: > > ``` > python run_lm_finetuning.py \ > --output_dir=$OUTPUT_DIR \ > --model_type=roberta \ > --model_name_or_path=roberta_base \ > --do_train \ > --train_data_file=$TRAIN_FILE \ > --do_eval \ > --eval_data_file=$TEST_FILE \ > --mlm \ > --no_cuda > ``` > > ## Questions & Help > Is there a way to run the above within python? Said differently, if I want to call `run_lm_finetuning.py` from within one of my own python scripts using the above configurations, how would I best go about doing that? > > Thanks in advance!<|||||>@TheEdoardo93 roger that. Thanks!
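For the second option, a minimal sketch using the flags from the question (the paths below are placeholders):

```python
import subprocess

output_dir, train_file, test_file = "output", "train.txt", "test.txt"  # placeholders

subprocess.run(
    [
        "python", "run_lm_finetuning.py",
        "--output_dir", output_dir,
        "--model_type", "roberta",
        "--model_name_or_path", "roberta-base",
        "--do_train",
        "--train_data_file", train_file,
        "--do_eval",
        "--eval_data_file", test_file,
        "--mlm",
        "--no_cuda",
    ],
    check=True,  # raise CalledProcessError if the script exits with a non-zero status
)
```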
transformers
2,112
closed
XLM model masked word prediction Double Language
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am trying to generate in-context word translations. For instance, if the target language is french and "well" is the word to translate. - I walked to the well. -> the translation for "well" should be "puit" - I am doing well. -> the translation for "well" should be "bien" I have a simple solution inspired from https://github.com/huggingface/transformers/issues/1842#issuecomment-555734728 and https://github.com/qiang2100/BERT-LS I am basically concatenating a sentence clone, masking the second target word and changing its language inside the "langs" tensor. The code is something like this : ``` from transformers import XLMTokenizer, XLMWithLMHeadModel import torch # model model_string = "xlm-mlm-tlm-xnli15-1024" # load tokenizer tokenizer = XLMTokenizer.from_pretrained(model_string) # encode sentence with a masked token in the middle encoded_array = tokenizer.encode( "That is a well. That is a " + tokenizer.mask_token + ".") sentence = torch.tensor([encoded_array]) # Identify the masked token position masked_index = torch.where(sentence == tokenizer.mask_token_id)[1].tolist()[0] # Load model model = XLMWithLMHeadModel.from_pretrained(model_string) # Load languages language_id_from = tokenizer.lang2id['en'] # 0 language_id_to = tokenizer.lang2id['fr'] # 0 languages_array = [language_id_from] * len(encoded_array) languages_array[masked_index] = language_id_to langs = torch.tensor(languages_array) langs = langs.view(1, -1) # Get the five top answers result = model(input_ids=sentence, langs=langs) prediction_scores = result[0] result = prediction_scores[:, masked_index].topk(20).indices result = result.tolist()[0] print(tokenizer.decode(result)) ``` I've been getting some positive results, and not so positive results (with longer sentences). For example: "I walked to the well" : well -> Puit is the 3rd result. [easily identifiable as correct!] "I felt well." : well -> Bien is the 6th result. [easily identifiable as correct!] "If the sentence is too convoluted, it won't translate well." -> The results are all parts of correct answers but wrong individually. Like "correct" and "ly" in french. [very hard to piece together answer] "After a hard day's work like yesterday, I really like to go jump in the well to cool down." -> all results are in English for some reason. [no french at all!] Sentences with the animal meaning of "bat" don't work at all. I don't think it knows a bat is an animal. My questions. - Should I be looking at something else than xlm; is there a better way of doing this already out there, or existing solution? I would rather avoid a cloud service like Yandex translate, as I will want to do this a lot! - If this can be achieved with xlm, is there a way to force a particular language, and full words?
12-09-2019 16:47:25
12-09-2019 16:47:25
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,111
closed
Could not run run_ner.py based on XLNET model
## ❓ Questions & Help Hello everyone, when I try to use ELnet model for the NER task through run_ner.py, it shows the following problem: __init__() got an unexpected keyword argument 'do_lower_case' So is it some problem in the modeling_utils.py? Thanks for someone's response!
12-09-2019 14:53:15
12-09-2019 14:53:15
What is ELnet model? The list of models that can be used for NER are: BERT, RoBERTa, DistilBERT (only for English text) and CamemBERT (only for French text). > ## Questions & Help > Hello everyone, when I try to use ELnet model for the NER task through run_ner.py, it shows the following problem: > > **init**() got an unexpected keyword argument 'do_lower_case' > > So is it some problem in the modeling_utils.py? Thanks for someone's response!<|||||>> What is ELnet model? The list of models that can be used for NER are: BERT, RoBERTa, DistilBERT (only for English text) and CamemBERT (only for French text). > > > ## Questions & Help > > Hello everyone, when I try to use ELnet model for the NER task through run_ner.py, it shows the following problem: > > **init**() got an unexpected keyword argument 'do_lower_case' > > So is it some problem in the modeling_utils.py? Thanks for someone's response! I add XLNet in the run_ner.py: from transformers import XLNetConfig, XLNetTokenizer, XLNetForTokenClassification MODEL_CLASSES = { "bert": (BertConfig, BertForTokenClassification, BertTokenizer), "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer), "distilbert": (DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer), "camembert": (CamembertConfig, CamembertForTokenClassification, CamembertTokenizer), "xlnet": (XLNetConfig, XLNetTokenizer, XLNetForTokenClassification), }<|||||>> > What is ELnet model? The list of models that can be used for NER are: BERT, RoBERTa, DistilBERT (only for English text) and CamemBERT (only for French text). > > > ## Questions & Help > > > Hello everyone, when I try to use ELnet model for the NER task through run_ner.py, it shows the following problem: > > > **init**() got an unexpected keyword argument 'do_lower_case' > > > So is it some problem in the modeling_utils.py? Thanks for someone's response! > > I add XLNet in the run_ner.py: > from transformers import XLNetConfig, XLNetTokenizer, XLNetForTokenClassification > MODEL_CLASSES = { > "bert": (BertConfig, BertForTokenClassification, BertTokenizer), > "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer), > "distilbert": (DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer), > "camembert": (CamembertConfig, CamembertForTokenClassification, CamembertTokenizer), > "xlnet": (XLNetConfig, XLNetTokenizer, XLNetForTokenClassification), > } Did you read #1592 and #2051 and similar?<|||||>> > > What is ELnet model? The list of models that can be used for NER are: BERT, RoBERTa, DistilBERT (only for English text) and CamemBERT (only for French text). > > > > ## Questions & Help > > > > Hello everyone, when I try to use ELnet model for the NER task through run_ner.py, it shows the following problem: > > > > **init**() got an unexpected keyword argument 'do_lower_case' > > > > So is it some problem in the modeling_utils.py? Thanks for someone's response! 
> > > > > > I add XLNet in the run_ner.py: > > from transformers import XLNetConfig, XLNetTokenizer, XLNetForTokenClassification > > MODEL_CLASSES = { > > "bert": (BertConfig, BertForTokenClassification, BertTokenizer), > > "roberta": (RobertaConfig, RobertaForTokenClassification, RobertaTokenizer), > > "distilbert": (DistilBertConfig, DistilBertForTokenClassification, DistilBertTokenizer), > > "camembert": (CamembertConfig, CamembertForTokenClassification, CamembertTokenizer), > > "xlnet": (XLNetConfig, XLNetTokenizer, XLNetForTokenClassification), > > } > > Did you read #1592 and #2051 and similar? I just find it, thanks sir!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
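For reference, the error in this thread is most likely just the tuple order: MODEL_CLASSES in run_ner.py is unpacked as (config_class, model_class, tokenizer_class), so swapping the last two sends the tokenizer-only `do_lower_case` kwarg into the model's `from_pretrained`. A corrected entry would look like this (assuming the installed version exposes `XLNetForTokenClassification`, as the import in the snippet above suggests):

```python
from transformers import XLNetConfig, XLNetForTokenClassification, XLNetTokenizer

MODEL_CLASSES = {
    # order matters: (config_class, model_class, tokenizer_class)
    "xlnet": (XLNetConfig, XLNetForTokenClassification, XLNetTokenizer),
}
```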
transformers
2,110
closed
Unable to load the downloaded BERT model offline on a local machine: could not find config.json, and Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index']
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I have downloaded the bert model [from the link in bert github page](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip) offline but unable to load the model offline . from transformers import * model = BertForMaskedLM.from_pretrained("/Users/Downloads/uncased_L-12_H-768_A-12/") Model name '/Users/Downloads/uncased_L-12_H-768_A-12/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed '/Users/Downloads/uncased_L-12_H-768_A-12/' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url. below are the files present in /Users/Downloads/uncased_L-12_H-768_A-12/ bert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index bert_model.ckpt.meta vocab.txt what should i do to load the downloaded model offline ? since the error was saying config.json not found i changed the above 4 file names by removing the word bert from it.Below are the new file names config.json model.ckpt.data-00000-of-00001 model.ckpt.index model.ckpt.meta vocab.txt now when i load the downloaded model offline i get a different error from transformers import * model = BertForMaskedLM.from_pretrained("/Users/Downloads/uncased_L-12_H-768_A-12/") Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory /Users/Downloads/uncased_L-12_H-768_A-12/ or `from_tf` set to False python version:3.7 tensorflow version:1.12
12-09-2019 12:58:52
12-09-2019 12:58:52
Hi, you're downloading one of the original implementation BERT models, which is in TensorFlow and you are trying to load it into one of our Pytorch models. You can either download one of our checkpoints hosted on our S3 with: ```py from transformers import BertForMaskedLM model = BertForMaskedLM.from_pretrained("bert-base-cased") ``` This model will now be available offline as it will be saved in your pytorch cache. Or you can convert the BERT model you downloaded to a checkpoint readable by our library by using the script [convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py)<|||||>@LysandreJik thank you . **How to differentiate Bert tensor flow and pytorch models ?** I followed the instruction and create a PyTorch model using this pyhton code ->convert_bert_original_tf_checkpoint_to_pytorch.py INFO:transformers.modeling_bert:Initialize PyTorch weight ['cls', 'seq_relationship', 'output_weights'] INFO:transformers.modeling_bert:Skipping cls/seq_relationship/output_weights/adam_m INFO:transformers.modeling_bert:Skipping cls/seq_relationship/output_weights/adam_v INFO:transformers.modeling_bert:Skipping global_step Save PyTorch model to /content/drive/My Drive/BMaskLang the BMaskLang file was 402 MB size and it did not have any file extension ,now when i tired to load this pytorch model i get an error _from transformers import BertForMaskedLM model = BertForMaskedLM.from_pretrained("/content/drive/My Drive/BMaskLang") Error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte_ Basically what im trying to do is train a BertForMaskedLM on a custom corpus . what are the steps to train BertForMaskedLM model on custom corpus and After train model how to load it ? After loading how to test it on a new sentence ? For example if there was a sentence in sample_text.txt corpus like "He went to space.He brought a moon" if i want to test my pretrained BertForMaskedLM to check if it correctly predicts the masked word in sentences" He went to [Mask] .He brought a gallon [Mask] so the model must predict the same words which was in sample_text.txt corpus "space","moon" rather than other words like "store","water" since it was trained on this sample_text.txt corpus .im expecting this behavior .Is this possible to pretrain and build language model using transformers bert ? <|||||>> @LysandreJik thank you . > **How to differentiate Bert tensor flow and pytorch models ?** > > I followed the instruction and create a PyTorch model using this pyhton code ->convert_bert_original_tf_checkpoint_to_pytorch.py > > INFO:transformers.modeling_bert:Initialize PyTorch weight ['cls', 'seq_relationship', 'output_weights'] > INFO:transformers.modeling_bert:Skipping cls/seq_relationship/output_weights/adam_m > INFO:transformers.modeling_bert:Skipping cls/seq_relationship/output_weights/adam_v > INFO:transformers.modeling_bert:Skipping global_step > Save PyTorch model to /content/drive/My Drive/BMaskLang > > the BMaskLang file was 402 MB size and it did not have any file extension ,now when i tired to load this pytorch model i get an error > > _from transformers import BertForMaskedLM model = BertForMaskedLM.from_pretrained("/content/drive/My Drive/BMaskLang") Error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte_ > > Basically what im trying to do is train a BertForMaskedLM on a custom corpus . 
> > what are the steps to train BertForMaskedLM model on custom corpus and > After train model how to load it ? > After loading how to test it on a new sentence ? > > For example if there was a sentence in sample_text.txt corpus like > "He went to space.He brought a moon" > > if i want to test my pretrained BertForMaskedLM to check if it correctly predicts the masked word in sentences" He went to [Mask] .He brought a gallon [Mask] > > so the model must predict the same words which was in sample_text.txt corpus "space","moon" rather than other words like "store","water" since it was trained on this sample_text.txt corpus .im expecting this behavior .Is this possible to pretrain and build language model using transformers bert ? I had the same problem loading a BERT model yesterday, and now I have found the solution. 1. run convert_bert_original_tf_checkpoint_to_pytorch.py to create pytorch_model.bin 2. rename bert_config.json to config.json After that, the directory must contain config.json (BertForMaskedLM.from_pretrained() needs it), pytorch_model.bin (BertForMaskedLM.from_pretrained() needs it) and vocab.txt (BertTokenizer.from_pretrained() needs it). python version 3.7 pytorch version 1.3.1 tensorflow version 2.0.0<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>> Hi, you're downloading one of the original implementation BERT models, which is in TensorFlow and you are trying to load it into one of our Pytorch models. > > You can either download one of our checkpoints hosted on our S3 with: > > ```python > from transformers import BertForMaskedLM > > model = BertForMaskedLM.from_pretrained("bert-base-cased") > ``` > > This model will now be available offline as it will be saved in your pytorch cache. > > Or you can convert the BERT model you downloaded to a checkpoint readable by our library by using the script [convert_bert_original_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) hi, I can't open this link https://github.com/huggingface/transformers/blob/master/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py, where can I find this script?
Thanks!<|||||>It's [here](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py)<|||||>Traceback (most recent call last): File "/users/sroychou/BERT_text_summarisation/scripts/train_bert_summarizer.py", line 12, in from metrics import optimizer, loss_function, label_smoothing, get_loss_and_accuracy, tf_write_summary, monitor_run File "/users/sroychou/BERT_text_summarisation/scripts/metrics.py", line 16, in _, _, _ = b_score(["I'm Batman"], ["I'm Spiderman"], lang='en', model_type='bert-base-uncased') File "/users/sroychou/.local/lib/python3.7/site-packages/bert_score/score.py", line 105, in score tokenizer = AutoTokenizer.from_pretrained(model_type) File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 298, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_auto.py", line 330, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 382, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for 'bert-base-uncased'. Make sure that: 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models' or 'bert-base-uncased' is the correct path to a directory containing a config.json file
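For completeness, roughly what the conversion script linked above does under the hood, using the paths from the question (requires TensorFlow to be installed so the checkpoint can be read; this is a sketch, not the script itself):

```python
import torch
from transformers import BertConfig, BertForPreTraining, BertForMaskedLM, load_tf_weights_in_bert

model_dir = "/Users/Downloads/uncased_L-12_H-768_A-12/"

config = BertConfig.from_json_file(model_dir + "bert_config.json")
model = BertForPreTraining(config)
load_tf_weights_in_bert(model, config, model_dir + "bert_model.ckpt")  # reads the TF checkpoint

torch.save(model.state_dict(), model_dir + "pytorch_model.bin")
config.save_pretrained(model_dir)  # writes config.json next to the weights

# the directory can now be loaded fully offline
model = BertForMaskedLM.from_pretrained(model_dir)
```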
transformers
2,109
closed
Error in TFBertForSequenceClassification
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): Multi-lingual The problem arise when using: * [x] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior I have fine-tuned a language model using `run_lm_finetuning.py`. When trying to load it with TFBertForSequenceClassification however, it fails. ``` config = transformers.BertConfig.from_json_file('./bertlm_model/config.json') model = transformers.TFBertForSequenceClassification.from_pretrained('./bertlm_model/', from_pt = True) ``` Showing the following error: ``` >>> model = transformers.TFBertForSequenceClassification.from_pretrained('./bertlm_model/', from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: classifier.weight not found in PyTorch model ``` If I try to run either `transformers.BertForSequenceClassification.from_pretrained('bertlm_model')` or `transformers.TFBertModel.from_pretrained('bertlm_model', from_pt = True)` all is fine! <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Ubuntu 18.04 * Python version: 3.7.5 * PyTorch version: 1.3.1 * Transformers version (or branch): Git repo master comit 0cb163865a4c761c226b151283309eedb2b1ca4d * Using GPU: Yes * Distributed of parallel setup ? * Any other relevant information:
12-09-2019 12:43:48
12-09-2019 12:43:48
The code line that loads the BERT configuration is surely correct: ``` > config = transformers.BertConfig.from_json_file('./bertlm_model/config.json') ``` But, for what concern the loading of a fine-tuned BERT model on a custom dataset, I think it's not correct the line you've used. Can you try with the following line suggested by me? ``` > from transformers import TFBertForSequenceClassification > model = TFBertForSequenceClassification.from_pretrained('bertlm_model', from_pt = True) ``` I suspect that it doesn't work however. **It's a PyTorch->TF 2.0 conversion problem**. It would be useful to understand that this bug occurs with _only_ BERT model or with _other_ models. > ## Bug > Model I am using (Bert, XLNet....): Bert > > Language I am using the model on (English, Chinese....): Multi-lingual > > The problem arise when using: > > * [x] the official example scripts: (give details) > * [ ] my own modified scripts: (give details) > > The tasks I am working on is: > > * [ ] an official GLUE/SQUaD task: (give the name) > * [x] my own task or dataset: (give details) > > ## Expected behavior > I have fine-tuned a language model using `run_lm_finetuning.py`. > > When trying to load it with TFBertForSequenceClassification however, it fails. > > ``` > config = transformers.BertConfig.from_json_file('./bertlm_model/config.json') > model = transformers.TFBertForSequenceClassification.from_pretrained('./bertlm_model/', from_pt = True) > ``` > > Showing the following error: > > ``` > >>> model = transformers.TFBertForSequenceClassification.from_pretrained('./bertlm_model/', from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: classifier.weight not found in PyTorch model > ``` > > If I try to run either `transformers.BertForSequenceClassification.from_pretrained('bertlm_model')` or `transformers.TFBertModel.from_pretrained('bertlm_model', from_pt = True)` all is fine! > > ## Environment > * OS: Ubuntu 18.04 > * Python version: 3.7.5 > * PyTorch version: 1.3.1 > * Transformers version (or branch): Git repo master comit [0cb1638](https://github.com/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d) > * Using GPU: Yes > * Distributed of parallel setup ? > * Any other relevant information:<|||||>Thanks for your answer - unfortunately it didn't work.. As I'm fine-tuning the LM on bert-multilingual, I can't try it out with other models. However I have tried to load all the different BERT huggingface-sub-models using my fine-tuned language model and it seems it is only TFBertModel and TFBertForMaskedLM it will load? Hope that can lead you in a direction? 
``` import transformers model_dir = 'bertlm_model/' config = transformers.BertConfig.from_json_file(model_dir + 'config.json') ``` ### TFBertModel (works fine) ``` >>> model = transformers.TFBertModel.from_pretrained(model_dir, from_pt = True, config = config) >>> ``` ### TFBertForPreTraining (won't load) ``` >>> model = transformers.TFBertForPreTraining.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: cls.seq_relationship.weight not found in PyTorch model >>> ``` ### TFBertForMaskedLM (works fine) ``` >>> model = transformers.TFBertForMaskedLM.from_pretrained(model_dir, from_pt = True, config = config) >>> ``` ### TFBertForNextSentencePrediction (won't load) ``` >>> model = transformers.TFBertForNextSentencePrediction.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: cls.seq_relationship.weight not found in PyTorch model >>> ``` ### TFBertForSequenceClassification (won't load) ``` >>> model = transformers.TFBertForSequenceClassification.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in 
load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: classifier.weight not found in PyTorch model >>> ``` ### TFBertForMultipleChoice (won't load) ``` >>> model = transformers.TFBertForMultipleChoice.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 109, in load_pytorch_weights_in_tf2_model tfo = tf_model(tf_inputs, training=False) # Make sure model is built File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 943, in call seq_length = shape_list(input_ids)[2] IndexError: list index out of range >>> ``` ### TFBertForTokenClassification (won't load) ``` >>> model = transformers.TFBertForTokenClassification.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: classifier.weight not found in PyTorch model >>> ``` ### TFBertForQuestionAnswering (won't load) ``` >>> model = transformers.TFBertForQuestionAnswering.from_pretrained(model_dir, from_pt = True, config = config) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) File 
"/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model assert name in pt_state_dict, "{} not found in PyTorch model".format(name) AssertionError: qa_outputs.weight not found in PyTorch model >>> ```<|||||>The same pattern of working (e.g. _TFBertForMaskedLM_) vs not working (e.g. _TFBertForQuestionAnswering_) appears also with the PyTorch version of these models? e.g. _BertForMaskedLM_ > Thanks for your answer - unfortunately it didn't work.. > > As I'm fine-tuning the LM on bert-multilingual, I can't try it out with other models. However I have tried to load all the different BERT huggingface-sub-models using my fine-tuned language model and it seems it is only TFBertModel and TFBertForMaskedLM it will load? > > Hope that can lead you in a direction? > > ``` > import transformers > model_dir = 'bertlm_model/' > config = transformers.BertConfig.from_json_file(model_dir + 'config.json') > ``` > > ### TFBertModel (works fine) > ``` > >>> model = transformers.TFBertModel.from_pretrained(model_dir, from_pt = True, config = config) > >>> > ``` > > ### TFBertForPreTraining (won't load) > ``` > >>> model = transformers.TFBertForPreTraining.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: cls.seq_relationship.weight not found in PyTorch model > >>> > ``` > > ### TFBertForMaskedLM (works fine) > ``` > >>> model = transformers.TFBertForMaskedLM.from_pretrained(model_dir, from_pt = True, config = config) > >>> > ``` > > ### TFBertForNextSentencePrediction (won't load) > ``` > >>> model = transformers.TFBertForNextSentencePrediction.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: cls.seq_relationship.weight not 
found in PyTorch model > >>> > ``` > > ### TFBertForSequenceClassification (won't load) > ``` > >>> model = transformers.TFBertForSequenceClassification.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: classifier.weight not found in PyTorch model > >>> > ``` > > ### TFBertForMultipleChoice (won't load) > ``` > >>> model = transformers.TFBertForMultipleChoice.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 109, in load_pytorch_weights_in_tf2_model > tfo = tf_model(tf_inputs, training=False) # Make sure model is built > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 822, in __call__ > outputs = self.call(cast_inputs, *args, **kwargs) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_bert.py", line 943, in call > seq_length = shape_list(input_ids)[2] > IndexError: list index out of range > >>> > ``` > > ### TFBertForTokenClassification (won't load) > ``` > >>> model = transformers.TFBertForTokenClassification.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File 
"/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: classifier.weight not found in PyTorch model > >>> > ``` > > ### TFBertForQuestionAnswering (won't load) > ``` > >>> model = transformers.TFBertForQuestionAnswering.from_pretrained(model_dir, from_pt = True, config = config) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 288, in from_pretrained > return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model > return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys) > File "/home/myuser/.local/share/virtualenvs/BERTlm-giwpujkO/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model > assert name in pt_state_dict, "{} not found in PyTorch model".format(name) > AssertionError: qa_outputs.weight not found in PyTorch model > >>> > ```<|||||>All models load fine using the PyTorch version. So it is only some of the TF versions that are not working.. ``` >>> model = transformers.BertModel.from_pretrained(model_dir, config = config) >>> model = transformers.BertForPreTraining.from_pretrained(model_dir, config = config) >>> model = transformers.BertForMaskedLM.from_pretrained(model_dir, config = config) >>> model = transformers.BertForNextSentencePrediction.from_pretrained(model_dir, config = config) >>> model = transformers.BertForSequenceClassification.from_pretrained(model_dir, config = config) >>> model = transformers.BertForMultipleChoice.from_pretrained(model_dir, config = config) >>> model = transformers.BertForTokenClassification.from_pretrained(model_dir, config = config) >>> model = transformers.BertForQuestionAnswering.from_pretrained(model_dir, config = config) >>> ```<|||||>Hello! If I understand correctly, you fine-tuned a BERT model with a language modeling head (`BertForMaskedLM`), which was then saved and now you're trying to load it in TensorFlow. You can load it with `TFBertModel` and `TFBertForMaskedLM` as the weights are there, but can't load it in other architectures as some weights are lacking. In PyTorch you can load them but it randomly initializes the lacking weights. I believe we should have the same behavior between our TensorFlow models and our PyTorch models so I'll take a look at it. 
In the meantime, here's a workaround that will allow you to load the models in TensorFlow, for example from a `BertForMaskedLM` checkpoint to a `TFBertForSequenceClassification`: - Save the `BertForMaskedLM` checkpoint - Load it in `BertForSequenceClassification` - Save the checkpoint from `BertForSequenceClassification` - Load this checkpoint in `TFBertForSequenceClassification` Here's an example that will allow you to do that, make sure the directories exist : ```py from transformers import BertForMaskedLM, BertForSequenceClassification, TFBertForSequenceClassification # This must have already been done by the script you used model = BertForMaskedLM.from_pretrained("bert-base-cased") model.save_pretrained("here") # Load the saved checkpoint in a PyTorch BertForSequenceClassification model and save it model = BertForSequenceClassification.from_pretrained("here") model.save_pretrained("here-seq") # Load the PyTorch model in the TF model of the same type TFBertForSequenceClassification.from_pretrained("here-seq", from_pt=True) ```<|||||>Perfect - the workaround works - thanks a lot 👍 And yes, that is sort of the procedure I've used. However I did't run the BertForMaskedLM directly but instead used the run_lm_finetuning.py script to generate my fine-tuned LM: ``` python run_lm_finetuning.py \ --train_data_file=<pathToTrain.txt>\ --output_dir=bertlm_model \ --eval_data_file=<pathToTest.txt>\ --model_type=bert \ --model_name_or_path=bert-base-multilingual-cased \ --mlm \ --cache_dir=cache \ --do_train \ --do_eval \ --per_gpu_train_batch_size=8\ --per_gpu_eval_batch_size=8 ``` And from there, I then try to load it with: ``` import transformers model_dir = 'bertlm_model' config = transformers.BertConfig.from_json_file(model_dir + '/config.json') model = transformers.TFBertForSequenceClassification.from_pretrained(model_dir, from_pt = True, config = config) ```
transformers
2,108
closed
I am running BERT fine-tuning with a CNN-based model but my program stops at loss.backward() without any prompt in cmd.
My aim is to make a five-category text classification I am running transformers fine tuning bert with `cnnbase` model but my program stops at `loss.backward()` without any prompt in `cmd`. I debug find that the program stop at the loss.backward line without any error prompt My program runs successfully in `rnn base` such as `lstm` and `rcnn`. But when I am running some `cnnbase` model the strange bug appears. My cnn model code: ``` import torch import torch.nn as nn import torch.nn.functional as F from transformers.modeling_bert import BertPreTrainedModel, BertModel n_filters = 200 filter_sizes = [2,3,4] class BertCNN(BertPreTrainedModel): def __init__(self, config): super(BertPreTrainedModel, self).__init__(config) self.num_filters = n_filters self.filter_sizes = filter_sizes self.bert = BertModel(config) for param in self.bert.parameters(): param.requires_grad = True self.convs = nn.ModuleList( [nn.Conv2d(1, self.num_filters, (k, config.hidden_size)) for k in self.filter_sizes]) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.fc_cnn = nn.Linear(self.num_filters * len(self.filter_sizes), config.num_labels) def conv_and_pool(self, x, conv): x = F.relu(conv(x)).squeeze(3) x = F.max_pool1d(x, x.size(2)).squeeze(2) return x def forward(self, input_ids, attention_mask=None, token_type_ids=None, head_mask=None): outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, head_mask=head_mask) encoder_out, text_cls = outputs out = encoder_out.unsqueeze(1) out = torch.cat([self.conv_and_pool(out, conv) for conv in self.convs], 1) out = self.dropout(out) out = self.fc_cnn(out) return out ``` My train code: ``` for step, batch in enumerate(data): self.model.train() batch = tuple(t.to(self.device) for t in batch) input_ids, input_mask, segment_ids, label_ids = batch print("input_ids, input_mask, segment_ids, label_ids SIZE: \n") print(input_ids.size(), input_mask.size(),segment_ids.size(), label_ids.size()) # torch.Size([2, 80]) torch.Size([2, 80]) torch.Size([2, 80]) torch.Size([2]) logits = self.model(input_ids, segment_ids, input_mask) print("logits and label ids size: ",logits.size(), label_ids.size()) # torch.Size([2, 5]) torch.Size([2]) loss = self.criterion(output=logits, target=label_ids) #loss function:CrossEntropyLoss() if len(self.n_gpu) >= 2: loss = loss.mean() if self.gradient_accumulation_steps > 1: loss = loss / self.gradient_accumulation_steps if self.fp16: with amp.scale_loss(loss, self.optimizer) as scaled_loss: scaled_loss.backward() clip_grad_norm_(amp.master_params(self.optimizer), self.grad_clip) else: loss.backward() # I debug find that the program stop at this line without any error prompt ``` HELP~!~ 、 I posted my questions on various community platforms,stackoverflow、other github repositories. No one replied to me.
12-09-2019 10:36:09
12-09-2019 10:36:09
The step-1 logits: tensor([[ 0.8831, -0.0368, -0.2206, -2.3484, -1.3595]], device='cuda:1', grad_fn=<AddmmBackward>) and the step-1 loss: tensor(1.5489, device='cuda:1', grad_fn=<NllLossBackward>), so why can't loss.backward() complete?
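Two generic PyTorch debugging switches that can surface what is actually going wrong in a silent stall like this (nothing transformers-specific, just a sketch):

```python
import os

# must be set before anything touches the GPU, e.g. at the very top of the training script;
# CUDA kernels then run synchronously, so an error is reported at the op that caused it
# instead of showing up as a silent hang at a later call such as loss.backward()
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch

# makes autograd point at the forward op that produced an invalid gradient
torch.autograd.set_detect_anomaly(True)
```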
transformers
2,107
closed
create encoder attention mask from shape of hidden states
As noted by @efeiefei (#1770) we currently create masks on the encoder hidden states (when they're not provided) based on the shape of the inputs to the decoder. This is obviously wrong; sequences can be of different lengths. We now create the encoder attention mask based on the `batch_size` and `sequence_length` of the encoder hidden states.
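A minimal sketch of the new behaviour (names follow the description above, not the literal diff):

```python
import torch

def default_encoder_attention_mask(encoder_hidden_states):
    # the mask is derived from the encoder output itself, not from the decoder inputs
    batch_size, sequence_length = encoder_hidden_states.shape[:2]
    return torch.ones(batch_size, sequence_length, device=encoder_hidden_states.device)

# e.g. a batch of 2 encoder sequences of length 7 with hidden size 768 -> mask of shape (2, 7)
mask = default_encoder_attention_mask(torch.zeros(2, 7, 768))
```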
12-09-2019 10:22:43
12-09-2019 10:22:43
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=h1) Report > Merging [#2107](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2107/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2107 +/- ## ========================================== + Coverage 82.67% 82.67% +<.01% ========================================== Files 111 111 Lines 16162 16164 +2 ========================================== + Hits 13362 13364 +2 Misses 2800 2800 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2107/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.72% <100%> (+0.04%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=footer). Last update [0cb1638...3520be7](https://codecov.io/gh/huggingface/transformers/pull/2107?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>👍
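A minimal sketch (not the PR's actual diff) of the behaviour described above: when no encoder attention mask is given, a default all-ones mask is built from the batch size and sequence length of the encoder hidden states, so source and target sequences may have different lengths.

```python
import torch

def default_encoder_attention_mask(encoder_hidden_states):
    # encoder_hidden_states: (batch_size, src_seq_len, hidden_size)
    batch_size, src_seq_len = encoder_hidden_states.shape[:2]
    # Attend to every encoder position by default; the shape follows the encoder
    # sequence, not the decoder inputs.
    return torch.ones(batch_size, src_seq_len, device=encoder_hidden_states.device)

encoder_hidden_states = torch.randn(2, 7, 768)            # source length 7
decoder_input_ids = torch.zeros(2, 5, dtype=torch.long)   # target length 5, irrelevant to the mask
print(default_encoder_attention_mask(encoder_hidden_states).shape)  # torch.Size([2, 7])
```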
transformers
2,106
closed
RobertaTokenizer runs slowly after add_tokens
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi, I use RobertaTokenizer like this:
```python
tokenizer = RobertaTokenizer.from_pretrained(FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
tokenizer.add_tokens([x.strip() for x in open('add_tokens.txt').readlines()])
```
There are about 200 words in `add_tokens.txt`. I tested on a 300-sample dataset, and tokenization takes about 250% more time after adding the tokens from `add_tokens.txt`. Is there any way to optimize it?
12-09-2019 09:58:35
12-09-2019 09:58:35
Hi, I've done a short study and I confirm the behavior you see. I've proposed a simple PR attached that gives interesting results and quite important speed improvement in any case. To be discussed!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
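A rough way to reproduce the slowdown reported in this thread (a hedged sketch; the 200 dummy tokens stand in for add_tokens.txt): time the same sentences with and without added tokens. Each added token triggers an extra splitting pass over the input text, which is the likely source of the overhead.

```python
import time
from transformers import RobertaTokenizer

sentences = ["The quick brown fox jumps over the lazy dog."] * 300

plain = RobertaTokenizer.from_pretrained("roberta-base")
extended = RobertaTokenizer.from_pretrained("roberta-base")
extended.add_tokens(["[NEW%d]" % i for i in range(200)])  # placeholder for add_tokens.txt

for name, tok in [("plain", plain), ("200 added tokens", extended)]:
    start = time.time()
    for sentence in sentences:
        tok.encode(sentence)
    print(name, "%.3fs" % (time.time() - start))
```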
transformers
2,105
closed
Some bug in using eval_all_checkpoints
When using --eval_all_checkpoints, the checkpoint search also picks up the pytorch_model.bin saved directly under output_dir; calling evaluate(args, model, tokenizer, prefix=global_step) on it then raises a FileNotFoundError.
12-09-2019 04:36:55
12-09-2019 04:36:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=h1) Report > Merging [#2105](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2105/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2105 +/- ## ======================================= Coverage 82.67% 82.67% ======================================= Files 111 111 Lines 16162 16162 ======================================= Hits 13362 13362 Misses 2800 2800 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=footer). Last update [0cb1638...4757840](https://codecov.io/gh/huggingface/transformers/pull/2105?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
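A hedged sketch (not the actual example script) of the failure mode described above and one way around it: a recursive glob for pytorch_model.bin also matches the copy saved directly under output_dir, whose parent folder carries no global-step suffix; restricting the search to checkpoint-* subfolders avoids the FileNotFoundError.

```python
import glob
import os

output_dir = "output"              # placeholder
weights_name = "pytorch_model.bin"

matches = glob.glob(os.path.join(output_dir, "**", weights_name), recursive=True)
# 'matches' can contain output/pytorch_model.bin as well as output/checkpoint-500/pytorch_model.bin

checkpoints = [
    os.path.dirname(path)
    for path in matches
    if os.path.basename(os.path.dirname(path)).startswith("checkpoint-")
]
for checkpoint in sorted(checkpoints):
    global_step = checkpoint.split("-")[-1]
    print("would evaluate", checkpoint, "with prefix", global_step)
```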
transformers
2,104
closed
Having trouble reproducing SQuAD 2.0 results using ALBERT v2 models
## ❓ Questions & Help
I tried to fine-tune the ALBERT v2 models on SQuAD 2.0, but sometimes the loss does not decrease and performance on the dev set is low. In my case the problem can occur with `albert-large-v2` and `albert-xlarge-v2`. Any suggestions?
![TIM截图20191209111606](https://user-images.githubusercontent.com/14048129/70404527-54894700-1a75-11ea-8ec2-1471547e01a9.png)
![TIM截图20191209111551](https://user-images.githubusercontent.com/14048129/70404532-58b56480-1a75-11ea-9f60-17f4fc0b0200.png)
![TIM截图20191209111533](https://user-images.githubusercontent.com/14048129/70404534-5b17be80-1a75-11ea-88ea-a02a4f0dcb2e.png)
12-09-2019 03:07:46
12-09-2019 03:07:46
What GPU(s) and hyperparameters are you using? Specifically: --learning_rate ? --per_gpu_train_batch_size ? --gradient_accumulation_steps ? --warmup_steps ? I'm on my third xxlarge-v1 fine-tune, ~23 hours each epoch plus eval on 2x NVIDIA 1080Ti. Results are relatively good, best of all the models I've fine-tuned on SQuAD 2.0 so far: ``` albert_xxlargev1_squad2_512_bs32: { "exact": 83.67725090541565, "f1": 87.51235434089064, "total": 11873, "HasAns_exact": 81.86572199730094, "HasAns_f1": 89.54692697189559, "HasAns_total": 5928, "NoAns_exact": 85.48359966358284, "NoAns_f1": 85.48359966358284, "NoAns_total": 5945 } ``` ![lr](https://user-images.githubusercontent.com/44321615/70405627-b5bc0680-19f2-11ea-8670-8385bce5f98c.jpg) ![loss](https://user-images.githubusercontent.com/44321615/70405643-c1a7c880-19f2-11ea-8fb5-cd216e26dc80.jpg) <|||||>I use 6xP40 for xlarge-v2 and 4xP40 for large-v2 with a same total batch size of 48 (8x6 & 12x4), lr is set to 3e-5 for all the runs. Other options remain default. I also launched several runs with same setting, sometimes the problem happened but sometimes didn't, this is weird because I didn't even change the random seed. <|||||>I meant to include this link in my post above, which details the Google-Research (GR) `run_squad_sp.py` hyperparameters: #https://github.com/huggingface/transformers/issues/1974 As demonstrated and referenced in my link, GR's bs=32 was a very slight improvement for me over my initial bs=48 fine-tune as you also chose. Peak learning_rate=5e-5 after a 10% linear lr warm-up proportion and linear lr decay after that. Hope this helps, please post your results for comparison.<|||||>From tensorboard, the best-performed one is albert-xxlarge-v2 with 88.49 F1 and 84.83 EM at step 25k. I didn't run any experiment on v1 models<|||||>> From tensorboard, the best-performed one is albert-xxlarge-v2 with 88.49 F1 and 84.83 EM at step 25k. I didn't run any experiment on v1 models Nice results, 6 epochs? According to GR at the time of V2 release, the xxlarge-V1 model outperforms the xxlarge-V2 model.<|||||>Not sure if this is related, but I found that ALBERT is very unstable. When running in non-deterministic mode, it will sometimes get stuck in a very strange spot and never recover. This becomes very clear when you use a secondary score as a sanity check (e.g. Pearson correlation for regression, f1 for classification). So for the exact same parameters (but each time presumably another random seed), I would sometimes get e.g. `r=0.02` and other times `r=0.77`. I'd have to test more to get conclusive results, but it's something that I haven't experienced before with other models.<|||||>The best I can get with xxlarge-v2 is ` Results: {'exact': 84.86481933799377, 'f1': 88.43795242530017, 'total': 11873, 'HasAns_exact': 82.05128205128206, 'HasAns_f1': 89.20779506504576, 'HasAns_total': 5928, 'NoAns_exact': 87. 67031118587047, 'NoAns_f1': 87.67031118587047, 'NoAns_total': 5945, 'best_exact': 84.86481933799377, 'best_exact_thresh': 0.0, 'best_f1': 88.4379524253, 'best_f1_thresh': 0.0} ` with 2e-5 lr, 4xV100, 2 samples per GPU, no gradient accumulation, and ran for 3 epochs. The current results are pretty about the same with Roberta large, but I expect better performance from ALBERT. Still tuning. Any idea on how to improve it? <|||||>Same issue with `albert-large-v2` but don't know why. Any result?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.
transformers
2,103
closed
Is there any way to treat whitespace characters the same as other characters when tokenizing?
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am trying to work on clinical notes data, and the whitespaces in the notes may contain useful information (e.g. section separation). Is there any way to encode the whitespaces as well during tokenization?
12-09-2019 02:34:43
12-09-2019 02:34:43
Did you try replacing all meaningful whitespaces by a special token `<space>` and just add this new token to the tokenizer and train your model with it?<|||||>> Did you try replacing all meaningful whitespaces by a special token `<space>` and just add this new token to the tokenizer and train your model with it? I guess it will be a bit tricky to define the "meaningful" whitespaces, but I will give it a shot. Thanks :)
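A small sketch of the suggestion above; the `<space>` token name and the "two or more newlines mark a section break" heuristic are assumptions, not part of the library.

```python
import re
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["<space>"])

note = "HPI: chest pain for two days.\n\nPMH: hypertension, diabetes."
marked = re.sub(r"\n{2,}", " <space> ", note)          # keep only the meaningful whitespace

ids = tokenizer.encode(marked, add_special_tokens=True)
print(tokenizer.convert_ids_to_tokens(ids))            # '<space>' survives as a single token
```

A model consuming these ids would also need its embedding matrix resized to cover the new token (resize_token_embeddings) and some fine-tuning to learn what the token means.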
transformers
2,102
closed
How to pretrain BERT whole word masking (wwm) model?
## 🚀 Feature
Code to pretrain a BERT whole-word-masking (WWM) model.
## Motivation
WWM offers better performance, but the current codebase doesn't seem to support this feature.
## Additional context
Related issue: #1352
12-08-2019 23:11:42
12-08-2019 23:11:42
Any ideas on whether this will be included sooner or later?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
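For context, a hedged sketch of what whole-word masking means with BERT's WordPiece vocabulary (illustrative grouping logic, not code from the library): pieces prefixed with '##' are grouped back with their word, and the masking decision is made per word so that all of its pieces are masked together.

```python
import random
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("i like granola")   # e.g. ['i', 'like', 'gran', '##ola']

# Group piece indices so '##' continuations stay with the word they belong to.
word_groups = []
for i, token in enumerate(tokens):
    if token.startswith("##") and word_groups:
        word_groups[-1].append(i)
    else:
        word_groups.append([i])

masked = list(tokens)
for group in word_groups:
    if random.random() < 0.15:                  # illustrative masking probability
        for i in group:                         # mask the whole word, never a lone piece
            masked[i] = "[MASK]"
print(masked)
```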
transformers
2,101
closed
:bug: #2096 in tokenizer.decode, adds a space after special tokens for string format
This correction is cosmetic and fixes the observed formatting issue. No test was implemented because, ideally, decoding an encoded sentence (`decode(encode(text))`) should return the original sentence. However, the code strips some spaces (and lower-cases), so it is not guaranteed to return exactly the original sentence with the same spacing.
12-08-2019 22:58:20
12-08-2019 22:58:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=h1) Report > Merging [#2101](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **decrease** coverage by `2.58%`. > The diff coverage is `19.23%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2101/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2101 +/- ## ========================================== - Coverage 82.67% 80.08% -2.59% ========================================== Files 111 112 +1 Lines 16162 16874 +712 ========================================== + Hits 13362 13514 +152 - Misses 2800 3360 +560 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/tokenization\_bert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9iZXJ0X3Rlc3QucHk=) | `89.47% <ø> (ø)` | :arrow_up: | | [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `29.89% <ø> (-0.72%)` | :arrow_down: | | [transformers/tests/tokenization\_gpt2\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9ncHQyX3Rlc3QucHk=) | `97.43% <ø> (ø)` | :arrow_up: | | [transformers/data/metrics/squad\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvbWV0cmljcy9zcXVhZF9tZXRyaWNzLnB5) | `0% <0%> (ø)` | | | [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `90.86% <100%> (ø)` | :arrow_up: | | [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.02% <100%> (+0.55%)` | :arrow_up: | | [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: | | [transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG0ucHk=) | `83.46% <100%> (+0.13%)` | :arrow_up: | | [transformers/data/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvX19pbml0X18ucHk=) | `100% <100%> (ø)` | :arrow_up: | | [transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `90.4% <100%> (+0.15%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/2101/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=footer). Last update [0cb1638...35737ea](https://codecov.io/gh/huggingface/transformers/pull/2101?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @mandubian, thanks for opening a PR to fix this. I think this raises an issue when there are two new tokens added which are right after each other, as spaces get added before and after the tokens. Here's an example of the issue, with your change and based on #2096: ```py from transformers import BertTokenizer bert_tokenizer = BertTokenizer.from_pretrained('bert-base-cased') bert_tokenizer.add_tokens(['[ENT]', '[TEN]']) print(len(bert_tokenizer)) x = bert_tokenizer.encode("you are the [ENT] [TEN] with [ENT] and [ENT]") print(bert_tokenizer.decode(x)) # outputs: [CLS] you are the [ENT] [TEN] with [ENT] and [ENT] [SEP] # with two spaces ----------------^^ ```<|||||>You're right, I hadn't thought about the case of 2 consecutive tokens. Let's try to make it better.<|||||>@LysandreJik I've pushed a new version for discussion. Added tokens aren't prepended with space anymore but subtexts are. I've considered different solutions but none is perfect and a compromise has to be made. In Bert tokenizer, `convert_tokens_to_string` joins with space between sub-strings (not in GPT2 tokenizer) but then those sub-strings and added tokens need to be separated also by spaces. So I choose to remove the space before added tokens and add a space in subtexts join so that there are always spaces. But it can add spaces where there weren't. With Bert Tokenizer, if you have `[ABC] toto tata [DEF] [GHI]`, `decode.encode` returns the same string (except lower case). But when you have less spaces `[ABC]toto tata [DEF][GHI]`, `decode.encode` returns `[ABC] toto tata [DEF] [GHI]` with more spaces. For GPT2, it's the same, it doesn't respect all spaces from input. I've added a test in `tokenizer_bert_test` and `tokenizer_gpt2_test` but it's not so good as it must be implemented for all tokenizers. Don't hesitate to give more ideas, it's just a proposition on this quite stupid issue quite far from models 😄 <|||||>This seems to work, thanks @mandubian! I pushed another commit on your fork to test every tokenizer + rebase on master.<|||||>@LysandreJik (I've deleted my previous message from tonight, I had misread your message on mobile :D) Just to know, don't you squash commits in general?
transformers
2,100
closed
Unclear how to decode a model's output
## Unclear how to decode a model's output
Hello, after digging through the docs for about an hour, it's still rather unclear to me how one is supposed to decode a model's output. Using the following code:
```
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
input = torch.tensor(tokenizer.encode('Some text')).unsqueeze(0)
outputs = model(input)
lhs = outputs[0]
print(tokenizer.decode(lhs))
```
The lhs is always decoded as `[UNK]`. Is this just the expected result due to the model being untrained? Is the decode functionality of the tokenizer being used in the wrong way? Searching for `decode` in the docs yields no code examples where it is used with a model's output.
12-08-2019 22:32:32
12-08-2019 22:32:32
DistilBERT as any BERT is a Transformer encoder so it encodes a sequence of tokens into a vector in the embedding space. It doesn't return a sequence of tokens. The output of the model is `return output # last-layer hidden-state, (all hidden_states), (all attentions)` https://github.com/huggingface/transformers/blob/master/transformers/modeling_distilbert.py#L484. If you check the size of this hidden-state, it is `torch.Size([1, 4, 768])`. `768` being the size of the hidden-state ie the size of the embedding vector. `4` is the number of token in input sequence (`[CLS]` and `[SEP]` tokens are added by tokenizer) `encode` is meant to return a sequence of token from a sequence of words. `decode` is meant to return a sequence of words from a sequence of tokens. So if you do: ```python encoded = tokenizer.encode('Some text') # encoded: [101, 2070, 3793, 102] decoded = tokenizer.decode(encoded)) # decoded: [CLS] some text [SEP] ``` But, you can't use `decode` on the output of the model as it's not a sequence of tokens but an embedding vector. Don't hesitate to ask question if my explanation isn't clear.<|||||>Hmh, I'll try to re-phrase my question because your answer did not clear up any of my confusion: Given the output of any hugging face model (e.g. the ones with a language modeling head, take for example `GPT2LMHeadModel`), how does one actually go from the model's output to words ? <|||||>Hi, you can read the [quickstart](https://huggingface.co/transformers/quickstart.html#openai-gpt-2) of the documentation to see how to use `GPT2LMHeadModel` with the decoding method.<|||||>@George3d6 you can decode if the output of your model has the size of the vocabulary. So you need an output head that convert the hidden-size of the encoder or decoder into the vocabulary size. For `GPT2LMHeadModel`, follow what Lysandre said.<|||||>thank you all for your explanations here. I can do that with the GPT2 models with no issues, but my issue is that now I want to do the same but with the smaller simpler DistilmBERT model which is also multilingual in 104 languages, so I want to generate text in for example Spanish and English and with this lighter model. So I do this: tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased') model = DistilBertForMaskedLM.from_pretrained('distilbert-base-multilingual-cased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, masked_lm_labels=input_ids) loss, prediction_scores = outputs[:2] but now, how do I get the continuation of the phrase at that point? I tried to apply tokenizer.decode with no luck there, thank you<|||||>This issue is quite old. But for those who still looking for the answer, you should load the model with LM Head instead. for BERT is BertForMaskedLM.from_pretrained() for DistilBERT is DistilBertForMaskedLM.from_pretrained() The tokenizer is the same.
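A hedged sketch of the resolution reached in this thread: load a checkpoint with a language-modeling head, take the argmax over the vocabulary dimension of the logits, and only then call tokenizer.decode — hidden states themselves cannot be decoded.

```python
import torch
from transformers import DistilBertTokenizer, DistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased")
model.eval()

input_ids = torch.tensor(tokenizer.encode("Some [MASK] text", add_special_tokens=True)).unsqueeze(0)
with torch.no_grad():
    logits = model(input_ids)[0]               # (batch, seq_len, vocab_size)

predicted_ids = logits.argmax(dim=-1)          # vocabulary ids are decodable
print(tokenizer.decode(predicted_ids[0].tolist()))
```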
transformers
2,099
closed
Which special token is used to predict the score in RoBERTa?
In BERT, we use the embedding of the `[CLS]` token to predict the score; which token plays that role in RoBERTa?
12-08-2019 14:24:20
12-08-2019 14:24:20
Can you give some more information? It's not clear what you mean by "score". The special classification token for RoBERTa is `<s>`.<|||||>Thanks!<|||||>@tzhxs If that's everything you need, please close this topic.<|||||>ok
transformers
2,098
closed
Understanding output of models and relation to token probability
## ❓ Questions & Help So I understand that different models were trained on different objectives. An important one is a masked language modeling objective. I would assume, then, that the model outputs probabilities for each token as the final output. Is that true? For models that have not been trained on MLM, is it still possible to get the model's given probability for that token? (I imagine that just taking the sigmoid is not exactly the probability of the model, right?)
12-08-2019 09:35:25
12-08-2019 09:35:25
There are different kinds of models. But as you talk about MLM, you might be talking about BERT-like models. BERT is based on a transformer encoder so by definition of transformer, it takes a sequence of tokens (a token is just an encoding of each word into a vocabulary of known size) and returns a sequence of vectors in an embedding space: each token of the input sequence has a representation in embedding space (768 or 1024 are the common sizes called hidden-sizes in general). Naturally, this embedding is specialized on a given task by adding one or several heads after the encoder. the MLMHead is one of those heads. NextSentencePredictionHead is another one. If you look at BERT code in MLM Mode, it uses the following head: ```python class BertLMPredictionHead(nn.Module): def __init__(self, config): super(BertLMPredictionHead, self).__init__() self.transform = BertPredictionHeadTransform(config) # The output weights are the same as the input embeddings, but there is # an output-only bias for each token. self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) self.bias = nn.Parameter(torch.zeros(config.vocab_size)) def forward(self, hidden_states): hidden_states = self.transform(hidden_states) hidden_states = self.decoder(hidden_states) + self.bias return hidden_states ``` You see here that the output of the model is passed through a simple `nn.Linear(config.hidden_size, config.vocab_size)` converting the embedding vector of size `hidden_size` into a vector of `vocab_size` (vocabulary size). Then you can softmax that into a vector of probability on the whole vocabulary and use argmax to get the most probable token. So for other models, it really depends on the head used. If it is the NextSentencePrediction head, it just classifies the embedding vector into binary true/false so you lose the probabilities on the vocabulary. Does it answer to your questions?<|||||>@mandubian Aha, that last Linear layer was what I was missing. I didn't quite understand how one could get there from simply the output of the BertModel itself (i.e. the encoder). I do have one more question, though. The parameters for BertLMPredictionHead are not pretrained, right? Would one still need to finetune that head on a huge dataset? (In particular I'm interested in RoBERTa, but I assume that it works similar to BERT.) More concretely, if I wanted to do inference and just get a token's probability (e.g. 'cookies' in the sentence 'I like cookies'), how could I do that? Thanks for your time and input!<|||||>BERT is pretrained on MLM and NSP heads and provided by transformers as is. ROBERTA is pretrained on MLM only. Check there https://huggingface.co/transformers/model_doc/bert.html#bertforpretraining, you should find what you need in outputs ;) <|||||>How did I not see this?! I'm blind. Well, perhaps I should read the documentation rather than the source code sometimes. To be honest, I didn't even know that that documentation existed! So I see > **prediction_scores**: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size) > Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). So what this actually returns (if you'd apply SoftMax) is the probability for the whole vocabulary. In other words, a probability across the vocabulary which should then sum to one. So what you'd need to do, then is to find the ID of the input token in the vocabulary, and then get that ID from the output of prediction_scores. 
But how then can you deal with subword units? If an input token is split, how can you then recover the original token and its output? I'm sorry for the flood of questions, but it seems like a snowball effect; every answer results in new questions. Seems like the example here is useful but I'll have to dig deeper to understand it completely. https://github.com/huggingface/transformers/blob/0cb163865a4c761c226b151283309eedb2b1ca4d/examples/utils_squad.py#L803-L808<|||||>No worry. Yes wordpiece tokenizer used in Bert (and BPE in GPT2) can cut a word into several tokens. With WordPiece tokenizer, it prefixes `##` to sub-word tokens. Check that code https://github.com/huggingface/transformers/blob/master/transformers/tokenization_bert.py#L419-L421 When decoding, it's very basic, it just convert back to pieces of strings and re-concatenate them by removing ` ##` in this function for example https://github.com/huggingface/transformers/blob/master/transformers/tokenization_bert.py#L191-L194. The rest can be found in papers, doc or code.<|||||>I understand the encoding/decoding process, but I don't quite understand how you can keep track of the subword units in the model. Let's take an example "I like granola.", which can be encoded as "[CLS] i like gran ##ola . [SEP]" in e.g. BERT. This means we'll get seven output tokens. Here, if we want to get the output values for the input word 'granola' we need index 3 and 4. My question then is, is there a way to keep track of this dynamically/automatically? In other words, a mapping from the input words to the tokenized words, so you can go back and see that granola was split into tokens at indices position 3 and 4.<|||||>In transformers models, AFAIK, there is no tracking of token <-> indices in original text (I can be wrong). In the cases I know, it's just using `##xyz` to mean it's a sub-word token belonging to word. In decoding, final erasure of ` ##` and concatenation rebuilds words. Yet, in examples/run_squad.py, there might be what you need as it seems to keep track of mapping between tokens and index original doc. Have a look at it, you'll see it's not trivial ;)
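A hedged sketch tying this thread together: softmax the MLM prediction scores over the vocabulary and read off the probability assigned to each observed input token. Note that nothing is masked here, so the model sees the token it is scoring, and how to combine the pieces of a split word (e.g. 'gran', '##ola') is a modelling choice, not something the library fixes.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

input_ids = torch.tensor([tokenizer.encode("I like granola", add_special_tokens=True)])
with torch.no_grad():
    prediction_scores = model(input_ids)[0]        # (1, seq_len, vocab_size)
probs = prediction_scores.softmax(dim=-1)

tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
for position, (token, token_id) in enumerate(zip(tokens, input_ids[0])):
    # probability the model assigns to the token actually present at this position
    print(token, probs[0, position, token_id].item())
```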
transformers
2,097
closed
about the special tokens
## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
This question is about RoBERTa. I know that BERT uses the embedding of the 'CLS' token for prediction, but I am not sure what RoBERTa uses. Can you tell me which token's embedding is used for prediction in this project? Is it '<s>'?
12-08-2019 09:07:46
12-08-2019 09:07:46
Please close this. It's a duplicate of your other question.
transformers
2,096
closed
The added tokens do not work as expected
Here is a minimal example, where we add a special token [ENT]:
```
from transformers import BertTokenizer
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased-vocab.txt')
bert_tokenizer.add_tokens(['[ENT]'])
print(len(bert_tokenizer))
x = bert_tokenizer.encode("you are the [ENT] with [ENT] and [ENT]")
print(x)
bert_tokenizer.decode(x)
```
After decoding, we end up with
```
you are the [ENT]with [ENT]with [ENT]
```
rather than
```
'you are the [ENT] with [ENT] with [ENT]'
```
12-08-2019 06:28:59
12-08-2019 06:28:59
I think you can keep the issue open, this is a bug that should be fixed.<|||||>This could be related, I'm on commit d46147294852694d1dc701c72b9053ff2e726265 ![image](https://user-images.githubusercontent.com/1544039/70831841-91c93e00-1dc1-11ea-9bb3-3803312d4456.png) It's strange that the id for "student" changed after adding the special token <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,095
closed
Can't get gradients from TF TransformerXL model forward pass
## 🐛 Bug
(Actually, I'm not sure whether this is a bug or whether I'm doing something wrong.)
<!-- Important information -->
- Model I am using: Transformer-XL
- Language I am using the model on (English, Chinese....): Chinese
- The problem arises when using: my own modified scripts
- The task I am working on is: my own task or dataset

## To Reproduce
0. My current testing environment is a CPU machine and a rented cloud GPU machine.
1. Clone my repository (https://github.com/Morizeyao/Decoders-Chinese-TF2.0) and install the requirements.
2. Copy the contents of the scripts folder to the root folder.
3. Run prepare_data.sh
4. Run train_xl.sh
5. The print output from line 107 of train_transformer_xl.py shows that the gradients are all zero.
6. The loss doesn't change during the training process.

## Expected behavior
I have tested the TFGPT2 model from the Transformers library and it worked fine (just run train_gpt2.py). Only the TF Transformer-XL model has this problem.

## Environment
* OS: macOS and Ubuntu
* Python version: 3.7
* PyTorch version: NA
* PyTorch Transformers version (or branch): 2.2.1
* Using GPU? No
* Distributed or parallel setup? No

## And...
- Thank you guys for this awesome project!
12-07-2019 10:50:00
12-07-2019 10:50:00
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
2,094
closed
How to save a model as a BertModel
## ❓ Questions & Help
I first fine-tuned a bert-base-uncased model on the SST-2 dataset with run_glue.py. Then I want to use the resulting pytorch_model.bin for further fine-tuning on the MNLI dataset. But if I use this pytorch_model.bin directly, an error occurs:
> RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
> size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
> size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([3]).

This error occurs because SST-2 has two classes but MNLI has three. Issue #1108 provides a solution by saving the BertModel without the classification head. But I wonder whether that is feasible when the model class was chosen as BertForSequenceClassification from the beginning. How do I change the model class in the saving step?
12-07-2019 10:11:43
12-07-2019 10:11:43
Hello! If you try to load your `pytorch_model.bin` directly in `BertForSequenceClassification`, you'll indeed get an error as the model won't know that it is supposed to have three classes. That's what the configuration is for! I guess you're doing something similar to this: ```py from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("bert-base-cased") model.load_state_dict(torch.load("SAVED_SST_MODEL_DIR/pytorch_model.bin")) # Crashes here ``` Instead, if you saved using the `save_pretrained` method, then the directory already should have a `config.json` specifying the shape of the model, so you can simply load it using: ```py from transformers import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained("SAVED_SST_MODEL_DIR") ``` If you didn't save it using `save_pretrained`, but using `torch.save` or another, resulting in a `pytorch_model.bin` file containing your model state dict, you can initialize a configuration from your initial configuration (in this case I guess it's `bert-base-cased`) and assign three classes to it. You can then load your model by specifying which configuration to use: ```py from transformers import BertForSequenceClassification, BertConfig config = BertConfig.from_pretrained("bert-base-cased", num_labels=3) model = BertForSequenceClassification.from_pretrained("bert-base-cased", config=config) model.load_state_dict(torch.load("SAVED_SST_MODEL_DIR/pytorch_model.bin")) ``` Let me know how it works out for you.<|||||>Yes!!! Setting the num_labels is useful! And I found that if i delete the classifier.weights and classifier.bias before i use torch.save(model_to_save.state_dict(), output_model_file), the pytorch_model.bin will be loaded well when further fine-tuning. And this model can be also used for QA or MultipleChoice. > Hello! If you try to load your `pytorch_model.bin` directly in `BertForSequenceClassification`, you'll indeed get an error as the model won't know that it is supposed to have three classes. That's what the configuration is for! > > I guess you're doing something similar to this: > > ```python > from transformers import BertForSequenceClassification > > model = BertForSequenceClassification.from_pretrained("bert-base-cased") > model.load_state_dict(torch.load("SAVED_SST_MODEL_DIR/pytorch_model.bin")) > # Crashes here > ``` > > Instead, if you saved using the `save_pretrained` method, then the directory already should have a `config.json` specifying the shape of the model, so you can simply load it using: > > ```python > from transformers import BertForSequenceClassification > > model = BertForSequenceClassification.from_pretrained("SAVED_SST_MODEL_DIR") > ``` > > If you didn't save it using `save_pretrained`, but using `torch.save` or another, resulting in a `pytorch_model.bin` file containing your model state dict, you can initialize a configuration from your initial configuration (in this case I guess it's `bert-base-cased`) and assign three classes to it. You can then load your model by specifying which configuration to use: > > ```python > from transformers import BertForSequenceClassification, BertConfig > > config = BertConfig.from_pretrained("bert-base-cased", num_labels=3) > model = BertForSequenceClassification.from_pretrained("bert-base-cased", config=config) > model.load_state_dict(torch.load("SAVED_SST_MODEL_DIR/pytorch_model.bin")) > ``` > > Let me know how it works out for you. Yes!!! Setting the num_labels is useful! 
And I found that if i delete the classifier.weights and classifier.bias before i use torch.save(model_to_save.state_dict(), output_model_file), the pytorch_model.bin will be loaded well when further fine-tuning. And this model can be also used for QA or MultipleChoice. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
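A hedged sketch of the trick mentioned at the end of this thread: drop the task-specific classifier weights from the state dict before saving, so the remaining weights load cleanly into a model with a different number of labels (the directory and file names below are placeholders).

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# After fine-tuning on SST-2 (2 labels), keep everything except the classification head.
sst2_model = BertForSequenceClassification.from_pretrained("SAVED_SST_MODEL_DIR")
body_only = {k: v for k, v in sst2_model.state_dict().items() if not k.startswith("classifier.")}
torch.save(body_only, "bert_body_only.bin")

# Reuse the body for MNLI (3 labels); the new classifier stays randomly initialized.
config = BertConfig.from_pretrained("bert-base-uncased", num_labels=3)
mnli_model = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
result = mnli_model.load_state_dict(torch.load("bert_body_only.bin"), strict=False)
print(result.missing_keys)   # only classifier.weight and classifier.bias should be missing
```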
transformers
2,093
closed
Remove pytest dependency.
This is a follow-up to PR #2055. This file was added between the moment I wrote #2055 and the moment it was merged.
12-07-2019 08:56:57
12-07-2019 08:56:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=h1) Report > Merging [#2093](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2670b0d682746e1fe94ab9c7b4d2fd7f4af03193?src=pr&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/2093/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #2093 +/- ## ========================================== - Coverage 82.67% 82.65% -0.02% ========================================== Files 111 111 Lines 16162 16162 ========================================== - Hits 13362 13359 -3 - Misses 2800 2803 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [transformers/tests/optimization\_tf\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2093/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL29wdGltaXphdGlvbl90Zl90ZXN0LnB5) | `86.76% <100%> (ø)` | :arrow_up: | | [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/2093/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.38% <0%> (-1.54%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=footer). Last update [2670b0d...010489c](https://codecov.io/gh/huggingface/transformers/pull/2093?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c Could you merge this please? :-)
transformers
2,092
closed
When I use AlbertModel, it prints the following repeatedly.
```python 0 0 Layer index 0 0 1 Layer index 0 0 2 Layer index 0 0 3 Layer index 0 0 4 Layer index 0 0 5 Layer index 0 0 6 Layer index 0 0 7 Layer index 0 0 8 Layer index 0 0 9 Layer index 0 0 10 Layer index 0 0 11 Layer index 0 0 0 Layer index 0 0 1 Layer index 0 0 2 Layer index 0 0 3 Layer index 0 0 4 Layer index 0 0 5 Layer index 0 0 6 Layer index 0 0 7 Layer index 0 0 8 Layer index 0 0 9 Layer index 0 0 10 Layer index 0 0 11 Layer index 0 ....... ``` Is this a mistake?
12-07-2019 08:38:18
12-07-2019 08:38:18
I encountered this issue when using apex mixed precision, and I put `amp.initialize` after wrapping the model in `DistributedDataParallel`, and I believe reversing the order to first call `amp.initialize` fixed it<|||||>I did not use mixed precision.<|||||>+1, also having this issue for *-v1 and *-v2 models. I'm not using mixed precision either.<|||||>I encountered the same problem. This problem can be fixed by removing line 289 and line 331 in modeling_albert.py (Those lines are not existed in this repo): ``` class AlbertLayerGroup(nn.Module): def __init__(self, config): super(AlbertLayerGroup, self).__init__() self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.albert_layers = nn.ModuleList([AlbertLayer(config) for _ in range(config.inner_group_num)]) def forward(self, hidden_states, attention_mask=None, head_mask=None): layer_hidden_states = () layer_attentions = () for layer_index, albert_layer in enumerate(self.albert_layers): layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index]) hidden_states = layer_output[0] print("Layer index", layer_index) if self.output_attentions: ... ``` ``` class AlbertTransformer(nn.Module): def __init__(self, config): super(AlbertTransformer, self).__init__() self.config = config self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.embedding_hidden_mapping_in = nn.Linear(config.embedding_size, config.hidden_size) self.albert_layer_groups = nn.ModuleList([AlbertLayerGroup(config) for _ in range(config.num_hidden_groups)]) def forward(self, hidden_states, attention_mask=None, head_mask=None): hidden_states = self.embedding_hidden_mapping_in(hidden_states) all_attentions = () if self.output_hidden_states: all_hidden_states = (hidden_states,) for i in range(self.config.num_hidden_layers): # Number of layers in a hidden group layers_per_group = int(self.config.num_hidden_layers / self.config.num_hidden_groups) # Index of the hidden group group_idx = int(i / (self.config.num_hidden_layers / self.config.num_hidden_groups)) # Index of the layer inside the group layer_idx = int(i - group_idx * layers_per_group) print(group_idx, layer_idx) ... ```<|||||>> print(group_idx, layer_idx) Thanks!