| column | dtype | values / lengths |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
13,328
closed
Licenses for Helsinki-NLP models
Some of the models in the hf-hub under the Helsinki-NLP repo are listed under the Apache 2.0 license, but most are listed without a license. Example of a model without a license: https://huggingface.co/Helsinki-NLP/opus-mt-en-de Only 371 models are tagged with a license here: https://huggingface.co/models?license=license:apache-2.0&sort=downloads&search=helsinki-nlp Is this omission intentional, or are all models in the repo actually intended to be Apache-licensed? If so, would it be possible to update them with license info?
08-30-2021 09:49:59
08-30-2021 09:49:59
@jorgtied might be able to answer this (and then we can programmatically update all models if needed) Thanks! (also cc @sshleifer and @patil-suraj for visibility)<|||||> They come with a CC-BY 4.0 license. Jörg > On 30. Aug 2021, at 13.16, Julien Chaumond ***@***.***> wrote: > > > @jorgtied <https://github.com/jorgtied> might be able to answer this (and then we can programmatically update all models if needed) > > Thanks! > > (also cc @sshleifer <https://github.com/sshleifer> and @patil-suraj <https://github.com/patil-suraj> for visibility) > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/13328#issuecomment-908221978>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAEWCPQVHTREFMSMDBTZPCLT7NK6NANCNFSM5DBQJLIQ>. > Triage notifications on the go with GitHub Mobile for iOS <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675> or Android <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>. > <|||||>Thanks a lot Jörg 🙏 . I'll update the repos programmatically tomorrow morning<|||||>Done. For reference, here's the script I've run (depends on https://github.com/huggingface/huggingface_hub/pull/339 to be able to run it using `huggingface_hub`): https://gist.github.com/julien-c/b2dcde5df5d5e41ad7c4b594cb54aba3 And here's a partial list of the generated commits (full list attached to the gist): ``` https://huggingface.co/Helsinki-NLP/opus-mt-bg-en/commit/3a34359f5781368c7748219c2868ffd065f24df0 https://huggingface.co/Helsinki-NLP/opus-mt-bg-fi/commit/04d4dd3690cc730690da31b45745fb3f74198b0f https://huggingface.co/Helsinki-NLP/opus-mt-bg-sv/commit/7f2c7cc3887492a080441266c63b20fd13497e56 https://huggingface.co/Helsinki-NLP/opus-mt-bi-en/commit/feb365f89ee1f47cad4f1581896b80ae88978983 https://huggingface.co/Helsinki-NLP/opus-mt-bi-es/commit/40001c75cc73df30ac2ffe45d8c3f224ee17781b https://huggingface.co/Helsinki-NLP/opus-mt-bi-fr/commit/31712329599ad7b50590cd35299ccc8d94029122 https://huggingface.co/Helsinki-NLP/opus-mt-bi-sv/commit/fa443f611486bd359dee28a2ef896a03ca81e515 https://huggingface.co/Helsinki-NLP/opus-mt-bzs-en/commit/4a0238e6463445a99590c0abe7aed5f2f95e064d https://huggingface.co/Helsinki-NLP/opus-mt-bzs-es/commit/b03449222edb29b8497af1df03c30782995912f5 https://huggingface.co/Helsinki-NLP/opus-mt-bzs-fi/commit/26a623904cfb745bdc48f4e62f4de8ec0f0f0bbb https://huggingface.co/Helsinki-NLP/opus-mt-bzs-fr/commit/5f69cdba6de378f61042d90ed0a19f3047837ea1 https://huggingface.co/Helsinki-NLP/opus-mt-bzs-sv/commit/2a12941aeaeaa78979240cfcb1d63e44958af76f https://huggingface.co/Helsinki-NLP/opus-mt-ca-en/commit/22113f5e0e8e89677d6e0142e55c85402eecb455 https://huggingface.co/Helsinki-NLP/opus-mt-ca-es/commit/3b93f0ccce95f7d8c7a78d56ec5c658271f6d244 https://huggingface.co/Helsinki-NLP/opus-mt-ceb-es/commit/94ff5e6902541d95fc1890e7e5e185477d922271 https://huggingface.co/Helsinki-NLP/opus-mt-ceb-fi/commit/8c5cdaa45a8ef959061c6d97a7f118e2714725bc https://huggingface.co/Helsinki-NLP/opus-mt-ceb-fr/commit/90d773c1774988007f9fd8f44477de8d5ee310b6 https://huggingface.co/Helsinki-NLP/opus-mt-ceb-sv/commit/bf1810fb698cbeb2a7beeecb96917557ece3158f https://huggingface.co/Helsinki-NLP/opus-mt-chk-en/commit/d9a7fad4fdc70b734457a5eee20835d8899e7415 https://huggingface.co/Helsinki-NLP/opus-mt-chk-es/commit/c41790360ecb70331ba71c881db1c592b0923502 
https://huggingface.co/Helsinki-NLP/opus-mt-chk-fr/commit/6db3456d236063ccbb97abdea52dc574da37a898 https://huggingface.co/Helsinki-NLP/opus-mt-chk-sv/commit/de1bf0196adc388148bb52c5388fd795c46191b6 https://huggingface.co/Helsinki-NLP/opus-mt-crs-de/commit/f0552c0fcef8dc8b03acc5ecf9c170a3a9356ca1 https://huggingface.co/Helsinki-NLP/opus-mt-crs-en/commit/7ee4bb979dd28886b7d98f890298c4548e84a847 https://huggingface.co/Helsinki-NLP/opus-mt-crs-es/commit/808d78b9c72092991bba047542192f26c3bff3b8 https://huggingface.co/Helsinki-NLP/opus-mt-crs-fi/commit/e61325e6904fe87fbad3e6d978dca63fb4e766ba https://huggingface.co/Helsinki-NLP/opus-mt-crs-fr/commit/341ed6222bcb84709acf9b8a3d5d57991b350c5e https://huggingface.co/Helsinki-NLP/opus-mt-crs-sv/commit/a338a7e5ef9b876f1edc63b0af6c6cd11e6a7611 https://huggingface.co/Helsinki-NLP/opus-mt-cs-de/commit/f5a1b1443dc5381df3a0a83d790b3c2eb16cf811 https://huggingface.co/Helsinki-NLP/opus-mt-cs-en/commit/186ab5dff3e18ca970a492525c0ca4b398d525ab https://huggingface.co/Helsinki-NLP/opus-mt-cs-fi/commit/d60a357cfb2c4d1df38b43f2fafe34dbff0199cf https://huggingface.co/Helsinki-NLP/opus-mt-cs-fr/commit/3040852ec5404c1da928602fa1ec636b6ddf9a2e https://huggingface.co/Helsinki-NLP/opus-mt-cs-sv/commit/ab967fe66d1c0d4f9403ae0b4c97c06ae8947b89 https://huggingface.co/Helsinki-NLP/opus-mt-csg-es/commit/9742b7a5ed07cb69c4051567686b2e1ace50b061 https://huggingface.co/Helsinki-NLP/opus-mt-csn-es/commit/c3086bbf7d9101947a5a07d286cb9ccc533f9e0a https://huggingface.co/Helsinki-NLP/opus-mt-cy-en/commit/775c85089bc7a55c8203bff544e9fa34cd4ba7ca https://huggingface.co/Helsinki-NLP/opus-mt-da-de/commit/2e4d10f7054f579178b167e5082b0e57726eee44 https://huggingface.co/Helsinki-NLP/opus-mt-da-en/commit/8971eb3839ec41bddd060128b9b83038bb43fd96 https://huggingface.co/Helsinki-NLP/opus-mt-da-es/commit/59b50e55d16babe69b0facb1fb1c4dfb175328fe https://huggingface.co/Helsinki-NLP/opus-mt-da-fi/commit/a2e614cb32e2b0fa09c5c1dcaba8122d9d647b18 https://huggingface.co/Helsinki-NLP/opus-mt-da-fr/commit/186e4c938bc1744a9ddbd67073fe572c93a494c8 https://huggingface.co/Helsinki-NLP/opus-mt-de-ZH/commit/93d4bc065a572a35ab1f1110ffeccc9740444a42 https://huggingface.co/Helsinki-NLP/opus-mt-de-ase/commit/09e461fdf799287e13c7c48df0573fd89273b1bd https://huggingface.co/Helsinki-NLP/opus-mt-de-bcl/commit/628737ef8907e7d2db7989660f413420cfad41f5 https://huggingface.co/Helsinki-NLP/opus-mt-de-bi/commit/7c40aed9a4611cec93aa9560f2bb99e49e895789 https://huggingface.co/Helsinki-NLP/opus-mt-de-bzs/commit/30ed515b4d391e1f98cefdbf5f6fcc340c979fce https://huggingface.co/Helsinki-NLP/opus-mt-de-crs/commit/b9de144126655b973cd8cf74a5651ac999e551a2 https://huggingface.co/Helsinki-NLP/opus-mt-de-cs/commit/683666e07ca027d76af9ac23c0902b29084a0d18 https://huggingface.co/Helsinki-NLP/opus-mt-de-da/commit/bccfbee95d55ba1333fd447f67574453eba5d948 https://huggingface.co/Helsinki-NLP/opus-mt-de-de/commit/7be6c82bcda2cf76f48ba1f730baeeebcbcb172d https://huggingface.co/Helsinki-NLP/opus-mt-de-ee/commit/42218c447d3da4a8836adb6de710d06bbad480c9 https://huggingface.co/Helsinki-NLP/opus-mt-de-efi/commit/1309ccb2f74acba991a654adf4ff1363a577d51b https://huggingface.co/Helsinki-NLP/opus-mt-de-el/commit/ad3da773c26cf72780d46b4a75333226a19760e4 https://huggingface.co/Helsinki-NLP/opus-mt-de-en/commit/6137149949ac01d19d8eeef6e35d32221dabc8e4 https://huggingface.co/Helsinki-NLP/opus-mt-de-eo/commit/9188e5326cba934d553fcb0150a9e88de140a286 https://huggingface.co/Helsinki-NLP/opus-mt-de-es/commit/d6bff091731341b977e4ca7294d2c309a2ca11e4 
https://huggingface.co/Helsinki-NLP/opus-mt-de-et/commit/55157cd448f864a87992b80aef23f95546a0280c https://huggingface.co/Helsinki-NLP/opus-mt-de-fi/commit/bbd50eeefdc1e26d75f6a806495192b55878c04a https://huggingface.co/Helsinki-NLP/opus-mt-de-fj/commit/596580a8225fb340357d25cd38639fed5d662681 https://huggingface.co/Helsinki-NLP/opus-mt-de-fr/commit/6aa8c4011488513f5575b235ce75d6d795d90b35 https://huggingface.co/Helsinki-NLP/opus-mt-de-gaa/commit/0722f96d5ce2e9fd6b2e0df3987105a78d062d1c https://huggingface.co/Helsinki-NLP/opus-mt-de-gil/commit/56bb25bf50c7b8268c9fd1ec8f8124e54631af59 https://huggingface.co/Helsinki-NLP/opus-mt-de-guw/commit/7a441fe0e9e7c4c430889b46b3b4541005c93bb1 https://huggingface.co/Helsinki-NLP/opus-mt-de-ha/commit/5a241c2d7ce3f36d42b7bbd7f563bd0da651d480 https://huggingface.co/Helsinki-NLP/opus-mt-de-he/commit/44d42278e67bf34bd1c0a8dcca06c6525eca6263 https://huggingface.co/Helsinki-NLP/opus-mt-de-hil/commit/4f0571df9d70e36af0435f1368a03cd059750c40 https://huggingface.co/Helsinki-NLP/opus-mt-de-ho/commit/6f07189ef39e3e609a24c45936c40e30fd6b3ef8 https://huggingface.co/Helsinki-NLP/opus-mt-de-hr/commit/d1b7e5205290af5c36e8be8cd6d73f6b5d9bba5f https://huggingface.co/Helsinki-NLP/opus-mt-de-ht/commit/2d296463f4735961ca4512271b415aacf7c0ba91 https://huggingface.co/Helsinki-NLP/opus-mt-de-hu/commit/4b30440320ea86d33b6927fe70c46e20f671da86 https://huggingface.co/Helsinki-NLP/opus-mt-de-ig/commit/862152c08618d17ff651fc7df9145d81519ba9f7 https://huggingface.co/Helsinki-NLP/opus-mt-de-ilo/commit/e9260adbaa77c85f5a0203460399c1cec12357c1 https://huggingface.co/Helsinki-NLP/opus-mt-de-iso/commit/d3d1caff0521142085ee7faa07112ce593803734 https://huggingface.co/Helsinki-NLP/opus-mt-de-it/commit/cd2319a082a7be0dd471fe62701ae557a71833c2 https://huggingface.co/Helsinki-NLP/opus-mt-de-kg/commit/495d68528e086b0ccea38761513241152e4f217f https://huggingface.co/Helsinki-NLP/opus-mt-de-ln/commit/05dd393385fb99c42d5849c22cef67931922eff3 https://huggingface.co/Helsinki-NLP/opus-mt-de-loz/commit/efc9fe11206c281704056c9c3eda0b42f1cf43a0 https://huggingface.co/Helsinki-NLP/opus-mt-de-lt/commit/e0105109d696baf37e2a4cca511a46f59fa97707 https://huggingface.co/Helsinki-NLP/opus-mt-de-lua/commit/319b94b75439b497c0860a3fc80a34ecacb597a0 https://huggingface.co/Helsinki-NLP/opus-mt-de-mt/commit/0d71c2c09e3838d7276288da102f7e66d2d24032 https://huggingface.co/Helsinki-NLP/opus-mt-de-niu/commit/6b15b26f7d7752bfde0368809479c544880174cd https://huggingface.co/Helsinki-NLP/opus-mt-de-nl/commit/da037ec1ad70f9d79735c287d418c00158b55b68 https://huggingface.co/Helsinki-NLP/opus-mt-de-nso/commit/fbd9a40fa66f610b52855ad16263d4ea32c8bd7c https://huggingface.co/Helsinki-NLP/opus-mt-de-ny/commit/595549133dfde470a3ea04e93674ff1c90c5ac5a https://huggingface.co/Helsinki-NLP/opus-mt-de-pag/commit/f03679f6d038388c5a0a40918acc4bf6406cac28 https://huggingface.co/Helsinki-NLP/opus-mt-de-pap/commit/6c57622b7e815f9e1cb24f6e1f9a09b58627f0b7 https://huggingface.co/Helsinki-NLP/opus-mt-de-pis/commit/ddfb8177ff0559adc697171c2c4c7704921bd4ec https://huggingface.co/Helsinki-NLP/opus-mt-de-pl/commit/67458bb97566391315397d8e0aa5f14f774bd238 https://huggingface.co/Helsinki-NLP/opus-mt-de-pon/commit/d18f29c5ef79abbca40d53e34b94c8514ffd6235 https://huggingface.co/Helsinki-NLP/opus-mt-ee-de/commit/5e01b793901fec6acbcaf6b35e9e0873d7190147 https://huggingface.co/Helsinki-NLP/opus-mt-ee-en/commit/a69e3d990dc8b84d8d727b9502c20511a50233ed https://huggingface.co/Helsinki-NLP/opus-mt-ee-es/commit/976bee3eb2616b35a55d6e6467ca2d211ba68d49 
https://huggingface.co/Helsinki-NLP/opus-mt-ee-fi/commit/8547cfc9f2c5ef75f00c78ef563eef59fc0204ee https://huggingface.co/Helsinki-NLP/opus-mt-ee-fr/commit/066e2a847a6098c2a999d6db7a1f50b878578c8e https://huggingface.co/Helsinki-NLP/opus-mt-ee-sv/commit/8170bc4af3be1e3633e37ef4180cada5eb177b2c https://huggingface.co/Helsinki-NLP/opus-mt-efi-de/commit/cedf2694630c1ee2ea1d75dffead02c4dc49ef80 https://huggingface.co/Helsinki-NLP/opus-mt-efi-en/commit/0bf437954f943da3d49a172b6f91aa7157c3525a https://huggingface.co/Helsinki-NLP/opus-mt-efi-fi/commit/02877c2ef68a205047cde71b4b376ffcc565e4a7 https://huggingface.co/Helsinki-NLP/opus-mt-efi-fr/commit/7b528531e45c04716015e7c211ef2b74817ff438 https://huggingface.co/Helsinki-NLP/opus-mt-efi-sv/commit/c02cd07b017c7c71d4583dbd6050dfee383a1cf0 https://huggingface.co/Helsinki-NLP/opus-mt-el-fi/commit/aef52d8c3cc2129847cf9ea84c62a5e7b9bb41bc https://huggingface.co/Helsinki-NLP/opus-mt-el-fr/commit/b00ba91c42b2f20768228b179f01274048158001 https://huggingface.co/Helsinki-NLP/opus-mt-el-sv/commit/e8894cf2f5713e1cc68fe7710636ecc4b4dc99d7 https://huggingface.co/Helsinki-NLP/opus-mt-en-CELTIC/commit/69fe75e42d848a1b30f968800ff94783e3ed8fe2 https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE/commit/92870a2f094c444064c7a568c25eef6971e07b03 https://huggingface.co/Helsinki-NLP/opus-mt-en-af/commit/c6a79302395db2b59af8b15f4016081a66095ace https://huggingface.co/Helsinki-NLP/opus-mt-en-bcl/commit/fdda7e146d903da0f4da8895800c52bdcfa07ecc https://huggingface.co/Helsinki-NLP/opus-mt-en-bem/commit/7d0c704d934f400158d645345a7ed27c6cfe73e8 https://huggingface.co/Helsinki-NLP/opus-mt-en-ber/commit/cad15de24b5374102d6dd95619d0c4011102dcce https://huggingface.co/Helsinki-NLP/opus-mt-en-bi/commit/b3e9ed52697fffab06a733a23c37d843a3464976 https://huggingface.co/Helsinki-NLP/opus-mt-en-bzs/commit/2b7c7d345202d17dd7f42850eae846e4d11b6fda https://huggingface.co/Helsinki-NLP/opus-mt-en-ca/commit/81d80b5921b66885e45c3b27615752da4b511b40 https://huggingface.co/Helsinki-NLP/opus-mt-en-ceb/commit/a5e0a21b4e9db37945be9cd5977573b53cd95999 https://huggingface.co/Helsinki-NLP/opus-mt-en-chk/commit/a57e025c3f8a7a9b20968190b6a6db234ef1541a https://huggingface.co/Helsinki-NLP/opus-mt-en-crs/commit/1f25af1f9d1c0680005a9f0d16ed8bb412784c32 https://huggingface.co/Helsinki-NLP/opus-mt-en-cs/commit/7cba4a7e3daff13c48fc2fcd740ef0711b1dd075 https://huggingface.co/Helsinki-NLP/opus-mt-en-cy/commit/038aee0304224b119582e0258c0dff2bc1c1c411 https://huggingface.co/Helsinki-NLP/opus-mt-en-da/commit/9786126ba34f1f86636af779ef13557bd9d1b246 https://huggingface.co/Helsinki-NLP/opus-mt-en-de/commit/6c00b328d3da7183582a4928b638b24a4a14a79f https://huggingface.co/Helsinki-NLP/opus-mt-en-ee/commit/45d6ef20f2aac6de3ad001d7452ff5243f25f219 https://huggingface.co/Helsinki-NLP/opus-mt-en-efi/commit/08b5f78e0bb66e8e1940fe1eb976a5b9de276f84 https://huggingface.co/Helsinki-NLP/opus-mt-en-el/commit/cd8ab0896f1d0598007ba5266a0a30884fed71de https://huggingface.co/Helsinki-NLP/opus-mt-en-eo/commit/20a8920034dfbb6b2e5909f5065a32d6b1b5990b https://huggingface.co/Helsinki-NLP/opus-mt-en-et/commit/f696ce2db3f802cf4dd723ea97b2af1eda90c7e9 https://huggingface.co/Helsinki-NLP/opus-mt-en-fi/commit/627fe90df5c335be61521cd89c68f62e2bdce050 https://huggingface.co/Helsinki-NLP/opus-mt-en-fj/commit/2c98ee541817946993595aa514f12804b6c95efc https://huggingface.co/Helsinki-NLP/opus-mt-en-fr/commit/a8fbc1c711cb6263e8a20c5229b210cc05c57ff0 https://huggingface.co/Helsinki-NLP/opus-mt-en-gaa/commit/2f75e3d8bc190f8e0e412beecf00e564c40e33c4 
```<|||||>Looking great! Thank you for the quick resolution of this!<|||||>I have a doubt, on the [opus-MT github page](https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master) it says that all pretrained models are under the cc by 4.0 license, but on hugging face many opus-MT models have apache 2.0 license, for example, [this model](https://huggingface.co/Helsinki-NLP/opus-mt-it-fr) on hugging face has the apache 2.0 license, while downloading it from the opus-MT github page ([here](https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/it-fr)) it has the cc by 4.0 license among the files, is this an error or did I miss something?<|||||>Hi @niedev, thanks for raising this! Could you open a discussion on the respective checkpoint pages on the hub?
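For anyone who wants to reproduce this kind of bulk license tagging, here is a minimal sketch rather than the exact gist linked above; it assumes a recent `huggingface_hub` release that exposes `metadata_update`, a token with write access, and an illustrative subset of repo ids:

```python
# Hedged sketch: tag a few Helsinki-NLP repos with a license via the model card metadata.
# This is NOT the original script from the gist; repo_ids below are illustrative only.
from huggingface_hub import metadata_update

repo_ids = [
    "Helsinki-NLP/opus-mt-en-de",
    "Helsinki-NLP/opus-mt-bg-en",
]

for repo_id in repo_ids:
    # Merges {"license": "cc-by-4.0"} into the YAML front matter of README.md
    # and creates a commit on the repo (requires write permissions).
    metadata_update(repo_id, {"license": "cc-by-4.0"})
    print(f"updated {repo_id}")
```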
transformers
13,327
closed
Wrong weight initialization for TF T5 model
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu) - Jax version: 0.2.18 - JaxLib version: 0.1.69 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: yes ### Who can help @patil-suraj @patrickvonplaten ## Information Model I am using: Pre-training T5-base The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below): Added to run_mlm.py the t5 data collator and keras adafactor optimizer The tasks I am working on is: * [X] my own task or dataset: Pre-training T5 base with oscar dataset (as in FLAX example) ## Expected behavior Before updating init weights to normal distribution (as in transformers/src/transformers/models/t5/modeling_flax_t5.py) loss stuck at 4.5 (unlike FLAX behaviour). after update of init weights i get same behaviour as in FLAX and reach <2 loss. Example: In flax code: class: FlaxT5DenseReluDense: lines 95:,96 wi_init_std = self.config.initializer_factor * (self.config.d_model ** -0.5) wo_init_std = self.config.initializer_factor * (self.config.d_ff ** -0.5) In TF code, the default initializer is used. My suggested fix: wi_initializer = tf.keras.initializers.RandomNormal(mean = 0, stddev = config.initializer_factor * (config.d_model ** -0.5)) wo_initializer = tf.keras.initializers.RandomNormal(mean = 0, stddev = config.initializer_factor * (config.d_ff ** -0.5)) self.wi = tf.keras.layers.Dense(config.d_ff, use_bias=False, name="wi",kernel_initializer=wi_initializer) self.wo = tf.keras.layers.Dense(config.d_model, use_bias=False, name="wo",kernel_initializer=wo_initializer) This is relevant for all weights and embeddings initialization.
08-30-2021 07:50:28
08-30-2021 07:50:28
I agree! Would you like to open a PR to fix it? :-)<|||||>Will try to do it on coming days<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
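For readability, the initializer change proposed in the issue body, written out as a stand-alone sketch; the helper function name is hypothetical and this is not a verbatim patch to `modeling_tf_t5.py`:

```python
import tensorflow as tf


def build_dense_relu_dense(config):
    """Hypothetical helper mirroring FlaxT5DenseReluDense's initialization scales in TF."""
    wi_initializer = tf.keras.initializers.RandomNormal(
        mean=0.0, stddev=config.initializer_factor * (config.d_model ** -0.5)
    )
    wo_initializer = tf.keras.initializers.RandomNormal(
        mean=0.0, stddev=config.initializer_factor * (config.d_ff ** -0.5)
    )
    # Same layers as before, but with explicit kernel initializers instead of the Keras default.
    wi = tf.keras.layers.Dense(config.d_ff, use_bias=False, name="wi", kernel_initializer=wi_initializer)
    wo = tf.keras.layers.Dense(config.d_model, use_bias=False, name="wo", kernel_initializer=wo_initializer)
    return wi, wo
```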
transformers
13,326
closed
Wav2Vec2ForCTC is not BaseModelOutput
On the website of huggingface: https://huggingface.co/transformers/model_doc/wav2vec2.html#wav2vec2forctc it says Wav2VecForCTC is "BaseModelOutput". But actually it is "CausalLMOutput", it has no attribute 'last_hidden_state' or the others of "BaseModelOutput". Its returns should belong to "CausalLMOutput": https://huggingface.co/transformers/main_classes/output.html#causallmoutput **The description of the returns of Wav2VecForCTC on the website:** <img width="1031" alt="130951325-9fd86ab4-4b2a-4965-b4bf-88b2cc556b46" src="https://user-images.githubusercontent.com/15671418/131300701-11106f9c-ab2a-42b8-8c35-9a7418e37474.png"> **The error when call "the last hidden state" of Wav2VecForCTC:** <img width="663" alt="130951343-eb4655a3-af57-4a2f-a387-0fa628f854dc" src="https://user-images.githubusercontent.com/15671418/131300840-107609a5-6d30-4d89-a0e1-ae003feb3934.png"> **The description of CasualLMOutput which the Wav2VecForCTC shoud be:** <img width="1077" alt="130951334-093e1df0-6207-45f0-b803-76b276f17f7b" src="https://user-images.githubusercontent.com/15671418/131300953-a8eaf063-db8d-46f3-836e-4baf534e1554.png"> @patrickvonplaten
08-30-2021 07:20:28
08-30-2021 07:20:28
Hi @patrickvonplaten , I read the source code and found the wav2vecctc only conducts word-level tokenization. Does it support ctc fine-tuning on grapheme level or character level? Thanks.<|||||>Oh yeah that's a typo in the docs indeed, but it's already fixed on master I think :-) See: https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2forctc<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
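To make the output-type distinction concrete, a small sketch (the checkpoint name and the silent dummy audio are placeholders): `Wav2Vec2ForCTC` returns a `CausalLMOutput`-style object, so hidden states have to be requested explicitly rather than read from `last_hidden_state`.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = torch.zeros(16000).numpy()  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values, output_hidden_states=True)

print(outputs.logits.shape)             # CTC logits over the vocabulary
print(outputs.hidden_states[-1].shape)  # use this in place of `last_hidden_state`
```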
transformers
13,325
closed
Handling tag with no prefix for aggregation_strategy in TokenClassificationPipeline
# 🚀 Feature request

Previously, the parameter grouped_entities would handle entities with no prefix (like "PER" instead of "B-PER") and would correctly group similar entities next to each other. With the new parameter aggregation_strategy, this is not the case anymore.

## Motivation

In some simple models, the prefix adds complexity that is not always required. Because of this we are forced to add a prefix to make aggregation work even if it is not required by the model.

## Your contribution
08-29-2021 21:09:09
08-29-2021 21:09:09
cc @Narsil <|||||>Hi @jbpolle what do you mean `correctly` ? We should not have changed behavior there, but indeed it's not part of the testing right now, so there might be some issues. Could you provide a small script on an older transformers version that displays the intended behavior ?<|||||>Hello Nicolas, Here is what it looks like now in the "hosted inference API » panel: This is from my model here: https://huggingface.co/Jean-Baptiste/camembert-ner?text=Je+m%27appelle+jean-baptiste+et+je+vis+%C3%A0+montr%C3%A9al In previous version, It would display « jean-baptiste PER » and « Montreal LOC ». However I renamed my entities in the config.json file to I-PER, I-ORG,…which I believe should fix this issue. Before that the entities were just PER, LOC,… I hope this help, Thank you, Jean-Baptiste > Le 30 août 2021 à 09:15, Nicolas Patry ***@***.***> a écrit : > > > Hi @jbpolle <https://github.com/jbpolle> what do you mean correctly ? We should not have changed behavior there, but indeed it's not part of the testing right now, so there might be some issues. > > Could you provide a small script on an older transformers version that displays the intended behavior ? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/13325#issuecomment-908333781>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AMIMGPPBHLPBSACPHW5PZWDT7OAAXANCNFSM5DAT4PAA>. > Triage notifications on the go with GitHub Mobile for iOS <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675> or Android <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>. > <|||||>Adding missing screenshot in previous message: <img width="541" alt="PastedGraphic-1" src="https://user-images.githubusercontent.com/51430205/131381199-64fa35a0-05dd-4233-a3de-e53307bd6f71.png"> <|||||>I went back to `4.3.3` and I can see that the splitting was exactly the same. (no grouping when tags didn't include B-, I- ). The fact that the cache wasn't probably cleaned on the widget is still an issue, clearing it.<|||||>I was working on 4.3.2 and here is how this was working: ![image](https://user-images.githubusercontent.com/51430205/132059845-039b0ade-54f2-45b1-80b1-993641b520e6.png) But now in 4.9: ![image](https://user-images.githubusercontent.com/51430205/132060946-dce28417-0080-43cb-93c3-95d0cc76e2cc.png) And even when playing with new aggregation_strategy parameters, I can't get previous results. Anyway it's fixed in my case by adding the prefix so don't hesitate to close the ticket. Thank you, <|||||>Ok, I must have tested it wrong before. I can confirm. This is indeed because the default for tags wasn't really explicited, but did behave as `I- ` Code was: ```python entity["entity"].split("-")[0] != "B" ``` Which would resolve to `"PER" != "B"` whereas now the default tag was explicitely set as B-: https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L413 The fix would be easy but I am unsure about reverting this now that was merged 6th June. Tagging a core maintainer for advice for how to handle this. @LysandreJik We would need to run some numbers on the hub too, to get an idea of amount of affected repos.<|||||>I would fix it to behave the same as it was in v4.3.2 as this is the expected behavior when using `grouped_entities`<|||||>PR opened. 
https://github.com/huggingface/transformers/pull/13493
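For reference, a minimal sketch of the aggregation behaviour discussed in this thread, using the model mentioned above; the exact grouping depends on the installed transformers version and on whether the labels carry B-/I- prefixes:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Jean-Baptiste/camembert-ner",
    aggregation_strategy="simple",  # replacement for the old grouped_entities=True
)

for entity in ner("Je m'appelle jean-baptiste et je vis à montréal"):
    # With un-prefixed tags such as "PER", older versions grouped adjacent tokens,
    # while versions that assume a "B-" default split them.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```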
transformers
13,324
closed
distilbert-flax
# What does this PR do? DistilBert Flax <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @VictorSanh @patrickvonplaten
08-29-2021 11:09:54
08-29-2021 11:09:54
Great Great job @kamalkraj! Think the only major thing to update is to docs in the modeling file (at the moment it looks like it's the PyTorch docs, but should be Flax :-)) <|||||>@patrickvonplaten Thanks for the review. Done changes according to your review. <|||||>Hi @kamalkraj , I'm also really interested in that PR - thanks for adding it :hugs: Do you also plan to add a script for the distillation process (like it is done in the ["old" script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation)), as I would like to re-distillate some of my previous DistilBERT models (I don't have access to multi GPU setups, only to TPUs at the moment).<|||||>Hi @stefan-it, I will go through the scripts and pings you. I have multi-GPU access. Which TPU do you use? v3-8 ?<|||||>``` JAX_PLATFORM_NAME=cpu RUN_SLOW=1 pytest tests/test_modeling_flax_distilbert.py::FlaxDistilBertModelIntegrationTest::test_inference_no_head_absolute_embedding ``` passes and the code looks good :-) Ready to merge IMO :tada: ! @patil-suraj the slow test doesn't pass on TPU since distilbert has pretty extreme activations in the forward pass like a couple of other models. We need to think a bit how to adapt the slow test depending on whether they're run on TPU or not in general...<|||||>Great work @kamalkraj !
transformers
13,323
closed
Documentation mismatch in Preprocessing data
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.2 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.19 - JaxLib version: 0.1.70 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @SaulLu <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information There seems a conflict in [ Utilities for tokenizers ](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__) and [Preprocessing data](https://huggingface.co/transformers/preprocessing.html?highlight=truncation#everything-you-always-wanted-to-know-about-padding-and-truncation). In **Preprocessing data**, For **`truncation_strategy = True`**, It states "truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided." whereas for the same in **Utilities for tokenizers**, it states "Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided.". <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior In Preprocessing_data documentation , `truncation_strategy=True` must match with `longest_first` instead of `only_first`. 
<!-- A clear and concise description of what you would expect to happen. -->
08-29-2021 04:46:39
08-29-2021 04:46:39
Indeed. Would you mind opening a PR with the change?
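A short example of the behaviour the two documentation pages describe, to make the `longest_first` / `only_first` distinction concrete (the checkpoint is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

first = "a fairly long first sentence that will need to be truncated somewhere"
second = "a much shorter second sentence"

# truncation=True corresponds to the "longest_first" strategy: tokens are removed
# one at a time from whichever sequence in the pair is currently the longest.
longest_first = tokenizer(first, second, truncation=True, max_length=16)

# "only_first" truncates only the first sentence of the pair.
only_first = tokenizer(first, second, truncation="only_first", max_length=16)

print(len(longest_first["input_ids"]), len(only_first["input_ids"]))
```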
transformers
13,322
closed
DistilGPT2 code from pytorch-transformers does not work in transformers; I made a basic example
How would I convert this to the new version of transformers? Or is it possible to somehow use DistilGPT2 with pytorch-transformers?

    use_transformers = True
    if use_transformers:
        import torch
        from transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
        tokenizer1 = GPT2Tokenizer.from_pretrained('distilgpt2', cache_dir="/var/software/Models/")
        model1 = GPT2LMHeadModel.from_pretrained('distilgpt2', cache_dir="/var/software/Models/")
        model1.eval()
        model1.to('cuda')
        text = "Who was Jim Henson ?"
        indexed_tokens = tokenizer1.encode(text)
        tokens_tensor = torch.tensor([indexed_tokens])
        tokens_tensor = tokens_tensor.to('cuda')
        with torch.no_grad():
            predictions_1 = model1(tokens_tensor)
        print(predictions_1)
    else:
        import torch
        from pytorch_transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
        tokenizer1 = GPT2Tokenizer.from_pretrained('gpt2', cache_dir="/var/software/Models/")  # cache_dir=None
        model1 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir="/var/software/Models/")
        model1.eval()
        model1.to('cuda')
        text = "Who was Jim Henson ?"
        indexed_tokens = tokenizer1.encode(text)
        tokens_tensor = torch.tensor([indexed_tokens])
        tokens_tensor = tokens_tensor.to('cuda')
        with torch.no_grad():
            predictions_1 = model1(tokens_tensor)
        print(predictions_1)

When I try it I get an error, and I tried to follow the guide but do not get what the new tokeniser does differently.
08-29-2021 02:38:20
08-29-2021 02:38:20
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
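For anyone landing on this issue, a short sketch of the same forward pass with the current `transformers` API; the key difference from `pytorch-transformers` is that the model call now returns a model-output object whose logits live under `.logits` instead of a plain tuple (illustrative example, not an official migration guide):

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.eval()

inputs = tokenizer("Who was Jim Henson ?", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pytorch-transformers returned a tuple; transformers returns an object with named fields.
next_token_logits = outputs.logits[:, -1, :]
print(tokenizer.decode(next_token_logits.argmax(-1)))
```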
transformers
13,321
closed
Add missing module __spec__
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds a missing `__spec__` object when importing the library that would be `None` otherwise. Fixes #12904 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-28-2021 18:18:49
08-28-2021 18:18:49
Thanks for your PR! As you can see, this makes all the tests fail because you changed the init of `_LazyModule` without adapting all the places it's used (all the intermediates init of each model). I'm not sure whether those intermediate inits need to pass along the spec attribute or not, if they do you should add it in each one of them (don't forget the model template as well), and if they don't, you should make that argument optional.<|||||>@sgugger Thanks for looking at it! Changed the `module_spec` arg to be optional as I don't see why the other intermediate inits would need it.<|||||>Great! One last thing: could you run `make style` on your branch to solve the code quality issue?<|||||>Last thing caught by the CI new that the style is correct: your new test file will never be run by the CI. Since it's linked to `_LazyModule` defined in file_utils, could you move it to `test_file_utils`? Thanks a lot.<|||||>Is this one ready to be merged and published?
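For context, the symptom this PR fixes can be illustrated with a couple of standard-library lines (a rough illustration, not the test that was added):

```python
import importlib.util

import transformers

# Before the fix, the lazily-built top-level module carried no spec: transformers.__spec__
# was None, which breaks tools that expect importlib.util.find_spec() to work for
# installed packages.
print(transformers.__spec__)
print(importlib.util.find_spec("transformers"))
```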
transformers
13,320
closed
examples: only use keep_linebreaks when reading TXT files
Hi, this is a follow-up (bug-fix) PR for #13150. It turns out - as reported in #13312 - that the `keep_linebreaks` argument only works when the Datasets extension is `text`. I used this logic to only pass the `keep_linebreaks` argument when the extension is `text`, simplified as:

```
dataset_args = {}
if extension == "text":
    dataset_args["keep_linebreaks"] = True
dataset = load_dataset(extension, data_files=data_files, **dataset_args)
print(dataset["train"][0])
```

When `keep_linebreaks` was set to `True` and reading in a text file, the output looks like:

```bash
{'text': 'Heute ist ein schöner Tach\n'}
```

For `keep_linebreaks` set to `False`, the output looks like:

```bash
{'text': 'Heute ist ein schöner Tach'}
```

So the proposed way is working with the `dataset_args` argument. I also checked that all examples are working when passing a CSV dataset.
08-28-2021 10:20:16
08-28-2021 10:20:16
transformers
13,319
closed
neptune.ai logger: add ability to connect to a neptune.ai run
A single line is changed: when the `NEPTUNE_RUN_ID` environment variable is set, Neptune will log into the previous run with id `NEPTUNE_RUN_ID`. trainer: @sgugger
08-28-2021 10:18:25
08-28-2021 10:18:25
Thanks a lot for your PR!
transformers
13,318
closed
Errors when fine-tuning RAG on cloud env
Hi the team, I'm trying to fine-tune RAG with [the scripts you provided](https://github.com/huggingface/transformers/tree/9ec0f01b6c3aff4636869aee735859fb6f89aa98/examples/research_projects/rag). My env is cloud servers (4 V100 with 48G GRAM), and I always have these errors when do the fine-tuning: > RuntimeError: Error in faiss::Index* faiss::read_index(faiss::IOReader*, int) at /__w/faiss-wheels/faiss-wheels/faiss/faiss/impl/index_read.cpp:480: Error: 'ret == (size)' failed: read error in <cache path>: 6907889358 != 16160765700 (Success) It seems like errors are from faiss (and I don't know how to interpret it. Sizes do not macth?). I used this command to do the fine-tuning: ```bash - python run_rag_ft.py --data_dir /msmarco --output_dir ./msmarco_rag --model_name_or_path facebook/rag-sequence-nq --model_type rag_sequence --fp16 --gpus 4 --distributed_retriever pytorch --num_retrieval_workers 4 --fp16 --profile --do_train --do_predict --n_val -1 --train_batch_size 8 --eval_batch_size 1 --max_source_length 128 --max_target_length 40 --val_max_target_length 40 --test_max_target_length 40 --label_smoothing 0.1 --dropout 0.1 --attention_dropout 0.1 --weight_decay 0.001 --adam_epsilon 1e-08 --max_grad_norm 0.1 --lr_scheduler polynomial --learning_rate 3e-05 --num_train_epochs 2 --warmup_steps 500 --gradient_accumulation_steps 1 ``` Nothing special but just use my own data (MSMARCO). I sticked to the pytorch for the distributed retriever, and have not yet tested the ray version. Is that the problem? I cannot run this on my local machines because of OOM errors (two 24GRAM GPUs). I think @patrickvonplaten could help me on this. Thanks!
08-28-2021 09:41:37
08-28-2021 09:41:37
Hey @DapangLiu, We sadly don't actively maintain the `research_projects` folder except for Wav2Vec2. Could you try to use the forum: https://discuss.huggingface.co/ instead? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,317
closed
How to use the pretraining task of ProphetNet
I want to use the pretraining task of ProphetNet, that recovers the mask span of the input sentence. I follow the instruction of Figure 1 in the paper. For example, the input is `But I [MASK][MASK] my life for some lovin\' and some gold` and I only recover the first `[MASK]`. (the sentence is from the pretraining corpus BookCorpus) I use the following code:

```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

tokenizer = ProphetNetTokenizer.from_pretrained('prophetnet')
model = ProphetNetForConditionalGeneration.from_pretrained('prophetnet')

# the sentence is from the pretraining corpus BookCorpus
input_ids = tokenizer('But I traded all my life for some lovin\' and some gold', return_tensors="pt")['input_ids']
mask_id = input_ids[0][2]
input_ids[0][2:4] = tokenizer.pad_token_id
decoder_input_ids = tokenizer('[MASK][MASK] I', return_tensors="pt")['input_ids']
# the way of MASS: decoder_input_ids = tokenizer('[MASK][MASK][MASK]', return_tensors="pt")['input_ids']

output = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
probs = output.logits[0][2]
# the rank of the target word in the vocabulary
print((probs[mask_id] < probs).sum())
```

However, the rank of `traded` is 15182 among 30522 words. And I also tried different masked words and masked spans, but the results are all unexpected. So, I want to ask if my way to recover the mask has some errors? @patrickvonplaten
08-28-2021 09:02:04
08-28-2021 09:02:04
cc @qiweizhen<|||||>@StevenTang1998 - could you maybe try to use the forum: https://discuss.huggingface.co/ for such questions. I haven't played around with the model enough to give a qualified answer here sadly :-/ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,316
closed
Squeeze and Excitation Network
# What does this PR do? This PR implements an optional Squeeze and Excitation Block in Bert and the copied modules (RoBerta, Electra, splinter and layoutlm) in pytorch. Fixes #11998 Additional tests have been added to the corresponding test scripts and the docs updated. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @LysandreJik
08-28-2021 05:38:51
08-28-2021 05:38:51
Hi, Thanks for your PR! However, I don't think that we want to add this block to files of other models. It's more appropriate to add a new SesameBERT model (if pretrained weights are available), or add it under the `research_projects` directory. cc @LysandreJik <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @AdityaDas-IITM, are you interested in working on this PR (making it a research project instead)?<|||||>Hey @NielsRogge, Yes I'll get started on it soon<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
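Since the PR was redirected towards a research project, here is a generic, self-contained sketch of the kind of squeeze-and-excitation block being discussed, written for (batch, seq_len, hidden) tensors; the reduction ratio and shapes are assumptions, not the PR's exact code:

```python
import torch
from torch import nn


class SqueezeExcitation(nn.Module):
    """Standard SE block adapted to transformer-style hidden states."""

    def __init__(self, hidden_size: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, hidden_size // reduction)
        self.fc2 = nn.Linear(hidden_size // reduction, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Squeeze: summarise the sequence into one descriptor per feature dimension.
        squeezed = hidden_states.mean(dim=1)  # (batch, hidden)
        # Excitation: bottleneck MLP producing per-feature gates in (0, 1).
        gates = torch.sigmoid(self.fc2(torch.relu(self.fc1(squeezed))))
        # Rescale the original activations feature-wise.
        return hidden_states * gates.unsqueeze(1)


block = SqueezeExcitation(hidden_size=768)
print(block(torch.randn(2, 128, 768)).shape)  # torch.Size([2, 128, 768])
```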
transformers
13,315
closed
Current trainer.py doesn't support beam search
# 🚀 Feature request

Currently I can't find any support for beam search in trainer.py. To begin with, it doesn't even import the BeamScorer or BeamHypotheses classes, and the evaluation_loop and prediction_loop don't make any use of beam search logic internally. It's misleading because the predict and evaluate functions in trainer_seq2seq.py set self._num_beams to a passed-in hyperparameter; however, that isn't used by the parent predict or evaluate functions. The run_summarization.py script also includes a beam search hyperparameter which isn't made use of. What would be the simplest way to have an evaluation and prediction step call and evaluate beam search?

## Motivation

Beam search is critical for the evaluation of seq2seq methods. HuggingFace must have a trainer that does integrate with beam search; I'm just not sure where it is exposed / how that integration works. For reference, in https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L2342 the prediction_step doesn't perform beam search, and this is what is called to get the loss, logits, and labels for each step in evaluation. Thus it isn't actually performing a search over all the possible beams and is instead evaluating each next step in the dataloader.

## Your contribution

I would further look into how the beam search util file is used in other models. The code exists at https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L1612 I just wonder why trainer.py isn't calling it in the evaluate or predict functions. Is there a reason for that?

@patil-suraj
08-28-2021 01:00:24
08-28-2021 01:00:24
This post on the forum will answer your question: https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145/2?u=nielsr<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
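As the linked forum answer explains, generation-based evaluation (including beam search) goes through `Seq2SeqTrainer` with `predict_with_generate`, not through `Trainer.prediction_step`. A hedged sketch of how `num_beams` ends up being used (the checkpoint and the one-example eval set are placeholders):

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "t5-small"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)


def encode(source, target):
    features = tokenizer(source, truncation=True)
    features["labels"] = tokenizer(target, truncation=True)["input_ids"]
    return features


eval_dataset = [encode("translate English to German: How are you?", "Wie geht es dir?")]

args = Seq2SeqTrainingArguments(
    output_dir="tmp_beam_search_demo",
    predict_with_generate=True,  # evaluate/predict now call model.generate(...)
)

trainer = Seq2SeqTrainer(model=model, args=args, eval_dataset=eval_dataset, tokenizer=tokenizer)

# num_beams and max_length are forwarded to generate() inside Seq2SeqTrainer.
print(trainer.evaluate(max_length=32, num_beams=4))
```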
transformers
13,314
closed
neptune.ai logger: utilize `rewrite_logs` in `NeptuneCallback` as in `WandbCallback`
A single line is changed: it applies the `rewrite_logs` conversion in the Neptune logger, as already implemented in the wandb logger. trainer: @sgugger
08-27-2021 23:50:49
08-27-2021 23:50:49
It turns out the neptune.ai UI doesn't support charts for nested logged variables.
transformers
13,313
closed
[Testing] Add Flax Tests on GPU, Add Speech and Vision to Flax & TF tests
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR does two things: - 1. Adds Flax to the daily slow tests on GPU and adds flax tests on GPU. There is a TF Docker image that works well for JAX on GPU - see: https://github.com/google/jax/discussions/6338 . I think it's easiest to just use this image for now until there is an official JAX docker image for GPU. - 2. We now have slow tests in both TF and Flax that require `soundfile` (TFHubert, TFWav2Vec2, FlaxWav2Vec2, ... Also there is FlaxViT in Flax which requires the `vision` package IMO. A new `tf-flax-speech` extension is added to make sure one doesn't install torch along torchaudio for TF and Flax's speech models and it is added to all the tests. Also it is very likely that some slow tests in Flax will fail at the moment since they have been written to pass on TPU. If ok, @patil-suraj and I can fix them one-by-one after getting a report from the daily slow tests - we'll probably have to add some `if-else` statements depending on the backend there... 2nd, at the moment, we don't have any multi-GPU or multi-TPU tests for Flax, but I nevertheless enable the tests on multi-gpu on Flax here already. I'll add a multi-gpu/multi-tpu test for all flax models next week (cc @patil-suraj) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 22:25:45
08-27-2021 22:25:45
> Looks great! Mostly left nitpicks. I'm fine with seeing which tests failed on a first run and then adapting/removing failed tests. > > Do you have a number in mind regarding total runtime? All non-slow tests together took 1h20, we don't have that many slow tests in Flax at the moment - so I'd assume that the total runtime would be something like 1h40<|||||>Running all the jitted tests takes a lot of time (but they're quite important IMO) <|||||>Ok sounds good!
transformers
13,312
closed
Having problems pre-training GPT models
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-2.7B The problem arises when using: * [X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X ] my own task or dataset: (give details below) ## To reproduce I used a csv file with each line have a sample of example Steps to reproduce the behavior: My input: `!python /content/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-neo-2.7B --train_file /content/df.csv --output_dir /tmp/test-clm` i also tried using the no trainer version but still doesn't work. What am i doing wrong? What i got back: ``` Traceback (most recent call last): File "/content/transformers/examples/pytorch/language-modeling/run_clm.py", line 520, in <module> main() File "/content/transformers/examples/pytorch/language-modeling/run_clm.py", line 291, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 830, in load_dataset **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 710, in load_dataset_builder **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 271, in __init__ **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 370, in _create_builder_config builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) TypeError: __init__() got an unexpected keyword argument 'keep_linebreaks' ``` ## Expected behavior Just want to further train the GPT model notebook: https://colab.research.google.com/drive/1bk8teH0Egu-gAmBC_zlvUifMHS7y_SyM?usp=sharing Any help is much appreciated
08-27-2021 20:37:02
08-27-2021 20:37:02
Probably caused by this PR: #13150 cc @stefan-it <|||||>I'm looking into it right now :)<|||||>Oh no, this is only happening when using CSV files as input :thinking: @mosh98 As a very quick workaround, could you try to "convert" your csv file into a normal text file (file extension .txt) and then re-run the training :thinking: <|||||>The `keep_linebreaks` argument is only implemented for text files in :hugs: Datasets: https://github.com/huggingface/datasets/blob/67574a8d74796bc065a8b9b49ec02f7b1200c172/src/datasets/packaged_modules/text/text.py#L19 For CSV it is not available: https://github.com/huggingface/datasets/blob/67574a8d74796bc065a8b9b49ec02f7b1200c172/src/datasets/packaged_modules/csv/csv.py<|||||>I'm working on a fix now (so that `keep_linebreaks` is only used when file extension is `.txt`)<|||||>Sure i can try that, i do have aquestion tho, when i convert my csv into a text file how will i organize it so that it uses each line as a sample and also what command do i have to put when i run the script? At the moment i have each row is the csv file as an individual sample<|||||>You can use the same structure (one individual sample per line) for the text file. Command would be pretty much the same, but you need to use the file ending `.txt`, so that the training script will infer the correct extension for the `load_dataset` argument :)<|||||>Thank you @stefan-it the script works now, running out of cuda memeory tho but i think it's irrelevant to the actual script and more to do with my device. Thanks Again!
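A minimal sketch of the workaround discussed above - dumping each CSV row to a plain-text file with one sample per line. The file names and the assumption that the text sits in the first column are illustrative, not taken from the thread:

```python
import csv

# Hypothetical paths; adjust to your own files.
with open("df.csv", newline="", encoding="utf-8") as src, open("train.txt", "w", encoding="utf-8") as dst:
    reader = csv.reader(src)
    for row in reader:
        # Assumes the sample text is in the first column; flatten any line breaks inside a sample.
        dst.write(row[0].replace("\n", " ").strip() + "\n")
```

The resulting `train.txt` can then be passed to `run_clm.py` via `--train_file train.txt`.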
transformers
13,311
closed
[Feature request] Introduce GenericTransformer to ease deployment of custom models to the Hub
# 🚀 Feature request Introduce a GenericTransformer model that can handle many different variants and tweaks of the Transformer architecture. There are 2 ways of doing this and I'm not 100% sure of which one would better suit HF: 1. Introduce a GenericTransformerModel with many different options (extensive config file), such as different positional embeddings or attention variants. The modeling code would constantly be updated by HF or contributions from the community and would be included in each release of the library itself. Backward compatibility would not necessarily be an issue if all new additions were disabled by default in the config class. Also, the model could be designed in a modular way to ease the addition of new variants (see torchtext's MHA container https://github.com/pytorch/text/blob/main/torchtext/nn/modules/multiheadattention.py). 2. Allow users to submit code following HF's interfaces alongside checkpoints. GenericTransformerModel would dynamically download and load code from the hub. I think the first one would be more convenient to avoid third-party dependencies and potentially unsafe code. The second one would be way more flexible, though. ## Motivation An important point of the HF Transformers library philosophy is outlined in the README of the repo: > Why shouldn't I use transformers? > This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. To clarify, this feature request does NOT intend to modify this philosophy. which clearly has many advantages. Instead, it has the purpose of potentially alleviating one of the drawbacks of this philosophy: the difficulties in sharing custom models, even if these models just introduce small tweaks (see https://github.com/stanford-crfm/mistral/issues/85, https://github.com/huggingface/transformers/pull/12243). This would hopefully encourage researching different variants and combinations. In case one variant stabilized as a well-defined architecture that was worth using, then it might be considered to add it to the library the "classical" way, having a specific class, documentation, etc. ## Your contribution I can't allocate time to this at the moment. Sorry about that.
08-27-2021 19:43:26
08-27-2021 19:43:26
cc @sgugger regarding 2. :)<|||||>The plan is to add in the coming weeks support for custom models directly in the AutoModel classes, with the user providing the code of their models in a modeling file in the same repository on the model hub (same for custom tokenizers). ETA for this feature should be end of next week.<|||||>> The plan is to add in the coming weeks support for custom models directly in the AutoModel classes, with the user providing the code of their models in a modeling file in the same repository on the model hub (same for custom tokenizers). > > ETA for this feature should be end of next week. Perfect, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>PR https://github.com/huggingface/transformers/pull/13467 introduced a first version of what you were asking for @jordiae! Let us know if it works for you :) <|||||>> PR #13467 introduced a first version of what you were asking for @jordiae! Let us know if it works for you :) Cool! Thanks!
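For reference, the feature discussed above ended up being exposed through the `trust_remote_code` flag on the Auto classes. A minimal, hedged sketch (the repo id is a placeholder, and the flag executes modeling code shipped in the repo, so it should only be enabled for trusted sources):

```python
from transformers import AutoModel

# "username/custom-transformer-variant" is a hypothetical repo containing its own modeling file.
model = AutoModel.from_pretrained("username/custom-transformer-variant", trust_remote_code=True)
```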
transformers
13,310
closed
:bug: fix small model card bugs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> - `model_index` ➡️ `model-index` - `metric` ➡️ `metrics` - Metrics Dict ➡️ List of Metrics Dicts These changes fix problem of user-provided evaluation metrics not showing up on model pages pushed to hub from trainer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 17:46:33
08-27-2021 17:46:33
Good catch, and thanks for working on this!<|||||>Here's a repo I created with it: [nateraw/vit-base-beans-demo-v3](https://huggingface.co/nateraw/vit-base-beans-demo-v3)
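For illustration, a sketch of the corrected metadata shape in Python (the trainer serializes an equivalent structure as YAML front matter in the README; the model name and metric value are placeholders):

```python
metadata = {
    "model-index": [  # previously "model_index"
        {
            "name": "my-finetuned-model",  # placeholder model name
            "results": [
                {
                    "task": {"name": "Text Classification", "type": "text-classification"},
                    "dataset": {"name": "glue", "type": "glue"},
                    # previously a single "metric" dict; now a list under "metrics"
                    "metrics": [{"name": "Accuracy", "type": "accuracy", "value": 0.0}],
                }
            ],
        }
    ]
}
```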
transformers
13,309
closed
Fixing a typo in the data_collator documentation
# Fixed a typo in the documentation
08-27-2021 16:49:45
08-27-2021 16:49:45
transformers
13,308
closed
[Large PR] Entire rework of pipelines.
# What does this PR do? tl;dr: Make pipeline code much more consistent and enable large speedups with GPU inference. # GPU pipeline Currently the way pipeline are setup, it's kind of hard to keep the GPU busy 100% because we're not enabling the use of DataLoader (on pytorch), which is necessary to keep CPU working on next items to tokenize, while processing an item on GPU. We cannot realistically use the current API to maximize utilization: ```python for item in dataset: # item == "This is some test" for instance output = pipe(item) # output == {"label": "POSITIVE", "score": 0,99} ``` So we need to change up the API to something closer to what `DataLoader` does, which is use an iterable, which enables to have worker CPU threads process next items while the GPU is busy on the current one, meaning we're now using 100% of the GPU. ```python for output in pipe(dataset): # output == {"label": "POSITIVE", "score": 0,99} pass ``` In order to make that change possible, we **need** to separate better what happens on the CPU vs the GPU. The proposed way is to split the __call__ of pipeline into 3 distinct function calls - `preprocess`: in charge of taking the original pipeline input, and output a dict of everything necessary to do `model(**model_inputs)` for instance (or a `generate` call, but stuff that will really involve the GPU. - `forward`: In most cases it's a simple function call to the model forward method, but can be more complex depending on the pipeline. It needs to be separate from the other 2 because this is where the GPU might be used. so we can encapsulate more logic around this in the base class (`no_grad`, sending and retrieving tensors to/from GPU etc..) - `postprocess`: Usually links to processing the logits into something more user-friendly for the task at hand, again usually pretty fast and should happen on CPU (but should be so fast it does not matter really to have a separate thread for this). In order to increase consistency across pipelines, ALL pipelines will have to implement the 3 methods, and should have a `__call__` method (with exceptions discussed in consistency). They should be readable on their own too, meaning, the outputs of `preprocess` should be **exactly** what is sent to `forward` and what is returned by `forward` exactly the inputs of `preprocess`. So: ```python model_inputs = pipe.preprocess(item) model_outputs = pipe.forward(item) outputs = pipe.postprocess(model_outputs) ``` will always be perfectly valid, even if not the most efficient. # Consistency of pipelines Right now, pipelines are quite inconsistent in their returned outputs. 
- Some have parameters to change the output format (this is fine) - Most pipelines accept lists of items, and will return a list of outputs but: - Some will return a single item only if the input was a list of a single item (regardless of what the inputs originally was) - Some will do it better and return single item only if single item was sent - Some will use lists as batching, some will not, leading to slowdowns at best, OOM errors on large lists, and overall pretty poor efficiency on GPU (more info: https://github.com/huggingface/transformers/issues/13141, https://github.com/huggingface/transformers/pull/11251, https://github.com/huggingface/transformers/pull/11251) Batching on GPU seems like what is speeding up, things, but really it's not at inference times, batching in ML is used because of gradients and it's necessary for the gradient descent to be smooth, the speed part of the GPU is really linked to overall GPU usage, using `DataLoader` is the key part here. Nonetheless, sometimes, depending on actual hardware, pipeline, and input data, batching *can* be used efficiently, so the new design should enable that. However, it shouldn't be done the way it's currently setup, which is some pipelines do, some don't and no consistency overall, it should be done on a different layer than dataprocessing part of the pipeline. Because of the inconsitencies mentionned above, this refactor will include some `__call__` methods to change the return type based on what was previously there so (`preprocess`, `forward` and `postprocess` are mostly pure, while `__call__` will handle backwards compatibilty) # Parameter handling Another cause of concern for pipelines was parameter handling. Most parameters were sent to `__call__` method, but some where sent to `__init__`. Some in both. That meant that you would have to look though the docs to guess if you needed to to ```python pipe = pipeline(....., num_beams=2) outputs = pipe(item) # or pipe = pipeline(....) outputs = pipe(item, num_beams=2) ``` The goal in this PR, was to make that explicit, so BOTH will be supported and have the exact same behavior. In order to do that, we introduced a new mandatory method `set_parameters` which would be called both in `__call__` and `__init__` in the same way so that it would always work. 1. Because this new `set_parameters` is a standard method, we can use it to properly discard unexpected keyword with a real errors instead of just ignoring it. 2. Because `__init__` and `__call__` are now base class only (roughly), we can capture parameters much better, meaning we don't have extra layer of parameter guessing (is it tokenization param, model param, pipeline param ?). Each method will capture everything it needs and pass on the rest, the ultimate method in the chain is `set_parameters` which might be specific parameters, or accept everything (like **generate_kwargs, so utlimately `generate` will have the final word). 3. Because `set_parameters` will be called at least 2 times and we don't know which one will have actual real values, it needs to be done in a somewhat odd way. The ways most pipelines will do, is simply have a default argument to `None`, so if the argument is `None` we know that the caller didn't supply this argument so we don't override it (the default one is defined in the `__init__` if dynamic or directly in the class if static. 
This however does not work when `None` is a valid choice for some parameter, this is true **only** for `zero-shot-classification` test, where we specially test that we raise some error when passing `None` as a value (so it can probably be changed, but will be backward incompatible regarding tests). For those, more complex logic is required. 4. Because we're now using `self` as the holder for parameters that means that using threading mecanisms to run the pipelines might lead to some oddities (but people most likely aren't using 1 pipeline on different threads, most likely shouldn't be at least). Other options are possible but would passing them though all 3 functions `preprocess`, `forward` and `postprocess` reducing readability IMHO, for debattable gains. # Results Currently we're sitting here performance wise bench code ```python from transformers import pipeline from transformers.pipelines.base import KeyDataset import datasets import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") print("New style of pipeline") for out in tqdm.tqdm(pipe(KeyDataset(dataset, "file"))): pass print("Old style of pipeline") for item in tqdm.tqdm(dataset): out = pipe(item["file"]) ``` Speed (done on old suffering GTX 970): ![F02AXFBCPJN](https://user-images.githubusercontent.com/204321/131305601-e1b75b93-97e6-47c8-a9ce-55bcbbaece58.png) ## Backward compatibility We're currently sitting at 100% backward compatibility regarding tests. We're not however 100% backward compatible. By fixing the inconsistencies of pipelines, we will break any code that was using parameters wrong (as they will suddenly start working or crashing because they're invalid). ## Tensorflow I mentionned `DataLoader` which will be used to great effectiveness on Pytorch + `list` inputs or `Dataset` input. (on single inference on GPU + pt, you will get a warning, prompting you to use more efficient methods) On tensorflow however, more work is needed to make it faster there too. At the very least we shouldn't degrade performance too much, this has to be checked (both GPU and CPU). Ideally we would have a similar mecanism than `DataLoader` to maximise efficiency on GPU tensorflow. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? 
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 16:21:17
08-27-2021 16:21:17
What about integrating this [idea](https://github.com/huggingface/transformers/issues/13274) into this rework?<|||||>@xegulon It would be a great addition, we already have similar functionnality within HF. The code is not open source just because it's messy and wouldn't fit `transformers` requirements (backward compatiblity and maintaining this is out of scope in our opinion) but we do reuse most tools that we provide (like export_onnx), so it's mostly plumbing. If we can find something clean enough, it's probable it would be a welcome addition. Few caveats to mention: - Using `ONNX` in fully optimized mode makes it hardware dependent (you HAVE to run on similar hardware as where the optimized file was created). - Using quantization might lead to performance drop (but also huge speedup). - Using ONNX with fancy methods like `generate` is much harder to do to keep performance (you have to take care of `past_key_values`). - Using ONNX with `generate` and running on GPU is actually counterproductive because we can't run the beam search directly on GPU tensors (that's an ORT limitation). So there's a lot of back&forth between GPU and CPU which is bad for performance. (We also tried the `beam_search` proposed by ORT but didn't find it was worth it as implementation differs significantly from transformers.) With those caveats in mind, feel free to add a PR, it would be a welcome addition if we manage to make it readable and orthogonal (the new refactor should help for sure). Try to make the PR small and more like PoC so everyone could weigh in in terms of design (most notably transformers core maintainers)<|||||>Hey, it's really great to see work on general code organisation to any degree. Thanks for your work. It looks like this PR introduced a bug around completing empty prompts: ``` transformers.pipeline('text-generation')('') ``` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 150, in __call__ return super().__call__(text_inputs, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 915, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 922, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 871, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 162, in _forward generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL File "/home/user/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1016, in generate return self.sample( File "/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1529, in sample outputs = self( File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 949, in forward transformer_outputs = self.transformer( File 
"/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 673, in forward input_ids = input_ids.view(-1, input_shape[-1]) RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous ```<|||||>🤑💪🏻<|||||>how to use my own dataset, that is txt file ,per line is the input for NER model could you pls help me ? <|||||>> how to use my own dataset, that is txt file ,per line is the input for NER model > could you pls help me ? the example scripts want itin jsonlines or csv. https://huggingface.co/docs/transformers/run_scripts#use-a-custom-dataset . you can use a tool to convert to jsonlines. it takes some patience to figure out a way to do each step, and then it works.
transformers
13,307
closed
[Flax] Correct all return tensors to numpy
# What does this PR do? This PR adapts all examples to return `numpy` instead of `jax` arrays so that JAX's asynchronous dispatch is not blocked: https://jax.readthedocs.io/en/latest/async_dispatch.html
08-27-2021 15:28:40
08-27-2021 15:28:40
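A minimal sketch of the pattern the PR applies in the Flax examples - tokenizing with `return_tensors="np"` so that host-side data preparation does not block JAX's asynchronous dispatch (checkpoint name is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Numpy outputs stay on the host; the batch is only turned into device arrays inside the
# jitted train step, so dispatch can stay asynchronous.
batch = tokenizer(["a first example", "a second example"], padding=True, return_tensors="np")
print(type(batch["input_ids"]))  # <class 'numpy.ndarray'>
```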
transformers
13,306
closed
Missing on_predict event in TrainerCallback
# 🚀 Feature request Can we add `on_predict` event support to the `Trainer` and `TrainerCallback`? ## Motivation I have already needed it in multiple projects. I think it makes sense since the `Trainer` already supports an `on_evaluate` event inside the `evaluate()` method, while the corresponding event handler is missing in the `predict()` method of the `Trainer` class. Some other training support libraries support such events, so I guess there are no strong reasons against it ([link](https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.Callback.html#pytorch_lightning.callbacks.Callback.on_test_end)). ## Your contribution I am willing to make a PR that implements this.
08-27-2021 15:22:01
08-27-2021 15:22:01
cc @sgugger <|||||>The reason why there is no `on_predict` event is that the `predict` method is never called during the training loop. You can add the code you want to be run after `Trainer.predict` just after calling that method.<|||||>Ok, I see. For my cases having this custom code in the callback feels cleaner and more consistent with the post-processing done during `evaluate()` method, but will do it as you suggest.
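A small sketch of the workaround suggested above - wrapping `Trainer.predict` in a helper that calls a user hook afterwards, emulating an `on_predict` event (function and argument names are illustrative):

```python
def predict_with_hook(trainer, test_dataset, on_predict):
    """Run Trainer.predict, then invoke a user callback, emulating an `on_predict` event."""
    output = trainer.predict(test_dataset)  # PredictionOutput(predictions, label_ids, metrics)
    on_predict(output)
    return output

# usage (assuming `trainer` and `test_dataset` already exist):
# predict_with_hook(trainer, test_dataset, lambda out: print(out.metrics))
```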
transformers
13,305
closed
LayoutLM ONNX support
# What does this PR do? This PR extends ONNX support to LayoutLM as explained in https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package Fixes Issue #13300 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfuntowicz @NielsRogge
08-27-2021 15:20:23
08-27-2021 15:20:23
@nishprabhu would you mind closing this PR and opening a new one, without any additional changes? Github has a weird issue making the diff almost impossible to review. Sorry for the inconvenience, please let us know if you need any help.<|||||>Sure, @mfuntowicz I'll open a new PR with the changes.
transformers
13,304
closed
Slow tests - run rag token in half precision
Currently, `tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_batch` errors out with OOM in the slow tests, so let's run it in half precision on GPU. The output has been verified to stay the same.
08-27-2021 14:48:03
08-27-2021 14:48:03
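A hedged sketch of running RAG generation in half precision, roughly what the test change amounts to (the dummy index keeps the example lightweight; the exact test code may differ):

```python
import torch
from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

if torch.cuda.is_available():
    model = model.to("cuda").half()  # fp16 roughly halves the GPU memory needed during generation

inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt").to(model.device)
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```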
transformers
13,303
closed
[Slow tests] Disable Wav2Vec2 pretraining test for now
Wav2Vec2 pretraining does not seem to be working currently. This PR disables the test: tests/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_inference_integration The test will be re-enabled once successful Wav2Vec2 pretraining has been done.
08-27-2021 14:30:20
08-27-2021 14:30:20
transformers
13,302
closed
Fix loading for newer m2m models
Newer versions of M2M models have the `args` attribute set to `None`; the arguments are instead present in `cfg['model']`. This PR also fixes the input arguments to the conversion function accordingly.
08-27-2021 14:07:19
08-27-2021 14:07:19
Thanks for the PR @harveenchadha ! Could you post the link for the new m2m models, I couldn't find anything new here https://github.com/pytorch/fairseq/tree/master/examples/m2m_100<|||||>Hey Suraj, from newer models I mean models trained with newer version of fairseq. I was trying to convert [Indic Trans](https://github.com/AI4Bharat/indicTrans) and ran into issues using this script.<|||||>I see, thanks! PR looks good, just one style check is failing, you could fix it by running `make style` and `make quality`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>gently pinging @harveenchadha :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,301
closed
Fixing mbart50 with `return_tensors` argument too.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 14:05:55
08-27-2021 14:05:55
transformers
13,300
closed
Support for converting LayoutLM to ONNX
# 🚀 Feature request Transformers currently provides ready configurations for converting BERT, BART, RoBERTa and several other models to ONNX. Can we extend this to also support LayoutLM? ## Motivation ONNX is quickly becoming the default runtime environment in many production settings. Ideally, all models supported by the library should have an easy path to conversion. ## Your contribution I am willing to submit a PR that implements this.
08-27-2021 12:33:50
08-27-2021 12:33:50
Sure that would be great. LayoutLM is literally only adding 4 additional embedding layers to BERT: https://github.com/huggingface/transformers/blob/a3f96f366a49bbe2cbdeaebd2e32ccdc1260a1d6/src/transformers/models/layoutlm/modeling_layoutlm.py#L66-L69 So I guess it won't be that difficult to support? cc @mfuntowicz <|||||>The guide written here is very helpful: https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package<|||||>Thanks! It was very useful!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>dumb question - how do i format the ORT inputs for LayoutLM onnx? does anyone have an example of LayoutLM ONNX inference? I'm trying to pass in the output of a collator into the onxx session. Its not liking the bounding box tensor since it its dimensions are different than input_ids, token_type_ids and attention_mask.
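Regarding the last question above, a hedged sketch of ONNX Runtime inference with an exported LayoutLM (exported e.g. via `python -m transformers.onnx --model=microsoft/layoutlm-base-uncased onnx/`). LayoutLM additionally expects one normalized box per token, i.e. a `bbox` input of shape `(batch, seq_len, 4)`; the exact graph input names can be checked with `session.get_inputs()`:

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
session = ort.InferenceSession("onnx/model.onnx")  # path produced by the export step

encoding = tokenizer("example receipt text", return_tensors="np")
seq_len = encoding["input_ids"].shape[1]
bbox = np.zeros((1, seq_len, 4), dtype=np.int64)  # dummy boxes; real ones come from the OCR layout

outputs = session.run(
    None,
    {
        "input_ids": encoding["input_ids"],
        "attention_mask": encoding["attention_mask"],
        "token_type_ids": encoding["token_type_ids"],
        "bbox": bbox,
    },
)
```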
transformers
13,299
closed
Moving `zero-shot-classification` pipeline to new testing.
# What does this PR do? And removing the old mixins ! <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 10:52:57
08-27-2021 10:52:57
transformers
13,298
closed
Examples: label mappings for text classification tasks are not written into the configuration
Hi, I've briefly discussed this with @patrickvonplaten, and we came to the conclusion that the following scenario is a bug: when using the text classification example for a GLUE task, no label mapping is written into the configuration of the final fine-tuned model. This leads to an unaesthetic "label_0" on the model hub, as can be seen here: ![Bildschirmfoto_2021-08-27_12-17-09](https://user-images.githubusercontent.com/20651387/131112292-ab2fc8f8-2fad-42cf-b436-0a0a5e0a4475.png) One has to manually extend the `config.json`: ```json "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 } ``` to get the following output: ![Bildschirmfoto_2021-08-27_12-17-29](https://user-images.githubusercontent.com/20651387/131112468-bff93d0f-4f3f-428d-ac3b-696ae6e08543.png) The text classification examples should be extended so that these label mappings are automatically added to the configuration file.
08-27-2021 10:19:44
08-27-2021 10:19:44
cc @sgugger @Rocketknight1 <|||||>Should be fixed by the PR above; I forgot the `label_to_name` dict was left as None for GLUE tasks.
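For reference, a short sketch of the pattern the fix boils down to - putting the label names into the config before training so that `save_pretrained`/`push_to_hub` writes readable `id2label`/`label2id` entries (model name and labels are just examples):

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

label_list = ["NEGATIVE", "POSITIVE"]  # example labels
config = AutoConfig.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(label_list),
    id2label={i: label for i, label in enumerate(label_list)},
    label2id={label: i for i, label in enumerate(label_list)},
)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", config=config)
```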
transformers
13,297
closed
Moving `translation` pipeline to new testing scheme.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-27-2021 09:43:19
08-27-2021 09:43:19
It seems one of the tests will need to be adapted: ``` =================================== FAILURES =================================== _____________ MBartEnroIntegrationTest.test_tokenizer_translation ______________ [gw1] linux -- Python 3.7.11 /usr/local/bin/python self = <tests.test_tokenization_mbart.MBartEnroIntegrationTest testMethod=test_tokenizer_translation> @require_torch def test_tokenizer_translation(self): > inputs = self.tokenizer._build_translation_inputs("A test", src_lang="en_XX", tgt_lang="ar_AR") E TypeError: _build_translation_inputs() missing 1 required positional argument: 'return_tensors' ```<|||||>Updated the test !
transformers
13,296
closed
__version__ attribute missing in model config for sentence-transformers/paraphrase-mpnet-base-v2
I've manually downloaded the model `paraphrase-mpnet-base-v2`, and it appears that the `SentenceTransformer.py` code is requesting a field `__version__` in the model config that doesn't appear to be there. I have read the related topic https://github.com/UKPLab/sentence-transformers/issues/184, but it doesn't solve the issue. In the link below: https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2 **Code given in the link:** ``` from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` **I guess the above code will not work as described in the link. Please correct this if it's an ongoing issue.** For SentenceTransformer, I guess we need the following files: ``` The folder should consist of these files: 0_Transformer/ 1_Pooling/ config.json modules.json ``` But when we download the model `paraphrase-mpnet-base-v2` and unzip it, it doesn't have `0_Transformer` in it. Any suggestions?
08-27-2021 09:38:55
08-27-2021 09:38:55
Maybe of interest to @nreimers <|||||>Hi @pratikchhapolika The above code works well with the most recent sentence-transformers v1 release (v1.2.1) or (better) v2 (>= 2.0.0). With older sentence-transformers versions the model does not work, as the folder structure has changed to make it compatible with the hub. A `0_Transformer` folder is not required and was removed in v2, so that models can also be loaded with HF transformers from the hub. Just update to v1.2.1 or v2.0.0 and everything works.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,295
closed
GPT2 model state dictionary tensor types do not match with PyTorch
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.1 and 4.4.2 - Platform: windows - Python version: 3.6 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: GPT2 - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: @patrickvonplaten, @LysandreJik 1. model = AutoModelForCausalLM.from_pretrained('gpt2', force_download=True) 2. Download size is 548M the size on disk is 535M 2. model.save_pretrained(<some_dir>) the model size comes down to 498M 3. Now a freshly downloaded model not using from_pretrained which has size of 535M is loaded using torch.load 4. for k, v in model.items(): ... if 'attn.bias' in k: ... print(v.type()) 5. The types are FloatTensor whereas if the same code is run with the model in step 1. I get key name transformer.h.0.attn.bias = type = **torch.ByteTensor** key name transformer.h.0.attn.c_attn.bias = type = torch.FloatTensor Is this expected? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior the model serialization and deserialization shouldn't change the tensor types. <!-- A clear and concise description of what you would expect to happen. -->
08-27-2021 08:37:47
08-27-2021 08:37:47
Hey @snaik2016, Note that `transformer.h.0.attn.bias` is actually not the bias weights of the attention layer but a pre-computed causal mask: https://github.com/huggingface/transformers/blob/319d840b46fd3a13e0434de9de69bd74a2f22f43/src/transformers/models/gpt2/modeling_gpt2.py#L130 => This means that you don't need to pass this parameter - it'll be generated automatically. TLDR; this behavior is expected IMO<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
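A short diagnostic sketch illustrating the explanation above - the causal-mask `attn.bias` entries live in `named_buffers()`, not `named_parameters()`, which is why their dtype can differ and why they can safely be regenerated on load:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

print(any(name.endswith(".attn.bias") for name, _ in model.named_buffers()))     # True: pre-computed causal mask
print(any(name.endswith(".attn.bias") for name, _ in model.named_parameters()))  # False: not a trainable weight
```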
transformers
13,294
closed
albert flax
# What does this PR do? ALBERT Flax Model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik @patrickvonplaten
08-27-2021 08:36:49
08-27-2021 08:36:49
@patrickvonplaten Done changes according to your suggestions. Thanks for the review <|||||>Hey @kamalkraj , thanks for that PR! Can't wait to road-test it :hugs: I tried training several ALBERT models with the official implementation, but had no luck in training a good performing model.<|||||> > I tried training several ALBERT models with the official implementation, but had no luck in training a good performing model. pre-training with TF 1 code?<|||||>Yeah, it was the TF 1 code base, and I've also trained various model sizes. Let's see if I have more luck using the FLAX implementation 😅<|||||>Awesome addition @kamalkraj - thanks a lot :-) Left a couple of final additions<|||||>@patrickvonplaten Done changes according to review.<|||||>Slow tests are passing on CPU - thanks for the model addition @kamalkraj !
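A minimal usage sketch of the Flax ALBERT model added in this PR (the checkpoint name is the standard `albert-base-v2`; passing `from_pt=True` may be needed if the repo only ships PyTorch weights):

```python
from transformers import AlbertTokenizer, FlaxAlbertModel

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = FlaxAlbertModel.from_pretrained("albert-base-v2")

inputs = tokenizer("ALBERT shares parameters across layers.", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```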
transformers
13,293
closed
DistilBertTokenizer for distilbert-base-multilingual-cased is unable to encode/decode Japanese characters properly, adding unnecessary characters in between
I have used a google collab notebook. `transformers` version used is 4.9.2 Model used : distilbert-base-multilingual-cased @LysandreJik I am facing an issue with the tokenizer: @LysandreJik The problem arises when using the tokenizer in the case of Japanese text. I have attached an example script of what I am going to suggest ![image](https://user-images.githubusercontent.com/23450481/131085602-be7ffceb-c408-4243-92b4-8abd99b4ec5f.png) I wanted to obtain the token ids for string "祝い めでた 動画". When I used the list of token ids to obtain the corresponding string, I obtained another string "祝い めでた 動 画" seems like there is a bug in either of the functions. Steps to reproduce the behavior: ```python from transformers import DistilBertTokenizer import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased') print(tokenizer.decode([ 101, 5914, 15221, 1965, 12236, 20058, 2621, 115384, 102])) print(tokenizer.decode([ 101, 5914, 1906, 1965, 12236, 20058, 2621, 5618, 102])) inputs = tokenizer("祝い めでた 動画", return_tensors="pt") print(inputs) ``` Expected behavior: I should have obtained the same string when I was decoding the token ids obtained from the input string.
08-27-2021 07:37:24
08-27-2021 07:37:24
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
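A small diagnostic sketch for the behavior reported above. The multilingual (Distil)BERT basic tokenizer inserts whitespace around every CJK character before WordPiece runs, so `decode` cannot recover the original spacing; inspecting the tokens makes this visible:

```python
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")

text = "祝い めでた 動画"
print(tokenizer.tokenize(text))  # CJK characters come out as individual pieces

ids = tokenizer.encode(text)
print(tokenizer.decode(ids, skip_special_tokens=True))  # spacing around CJK characters differs from the input
```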
transformers
13,292
closed
Add REALM
# What does this PR do? This PR adds REALM. - Original paper: https://arxiv.org/abs/2002.08909 - Code and checkpoints: https://github.com/google-research/language/tree/master/language/realm ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Closes https://github.com/huggingface/transformers/issues/3497
08-27-2021 07:31:40
08-27-2021 07:31:40
Hi @qqaatw, Thank you so much for putting effort on this PR, providing pre-trained REALM models in Pytorch Transformers API. I am wondering whether your REALM models in pytorch can reproduce Table 2 of their original [paper](https://arxiv.org/pdf/2002.08909.pdf)? Alternatively, do you verify their tensorflow pre-train model has the same embeddings as your converted pytorch models given arbitrary input sequence? Thanks again for the awesome work! <|||||>Hello @OctoberChang, thanks for your reply! This is my first time trying to port a model from Tensorflow, so I may need some time to clarify the structure and behavior of the original model. Currently, the retriever part was successfully converted. Regarding your concerns, I've verified the retriever's behavior by feeding the same inputs to TensorFlow and PyTorch models respectively, and then checking their outputs that are nearly identical. For now, I may not have enough resources to complete those ablation experiments sadly, but I think it can be reproduced as long as the PyTorch model's behavior is nearly the same as that of Tensorflow.<|||||>> Hello @OctoberChang, thanks for your reply! > > This is my first time trying to port a model from Tensorflow, so I may need some time to clarify the structure and behavior of the original model. Currently, the retriever part was successfully converted. > > Regarding your concerns, I've verified the retriever's behavior by feeding the same inputs to TensorFlow and PyTorch models respectively, and then checking their outputs that are nearly identical. For now, I may not have enough resources to complete those ablation experiments sadly, but I think it can be reproduced as long as the PyTorch model's behavior is nearly the same as that of Tensorflow. Awesome! Looking forward to this PR and the pre-trained Realm models in Pytorch Transformers! <|||||>The reason I didn't add `RealmForQuestionAnswering` is the following: 1. The fine-tuning code is placed at another project, [ORQA](https://github.com/google-research/language/tree/master/language/orqa), which has its own [paper](https://arxiv.org/abs/1906.00300). 2. The architecture of fine-tuned models is not compatible with the existing question answering architecture in `transformers`. Therefore, I think residing the fine-tuning code to research_project folder or making it a new model would be more appropriate. <|||||>Tests related to REALM have passed! Some failures seem related to Flax Big Bird model. @sgugger @LysandreJik @patrickvonplaten <|||||>@OctoberChang Do you have any suggestion on this PR? :-)<|||||>@patrickvonplaten Thank you a lot for the comments and review. I've left replies on the threads.<|||||>Hi ! Sure we can add the index in `datasets`. Do you know what data they used exactly ? Are the texts available ? If yes, did they also share the embeddings of the documents ? Otherwise we can just build an index from scratch using Wikipedia and the model to encode the documents<|||||>Hey @qqaatw, Sorry to answer that late. We just had a long discussion internally with @lhoestq on how to best integrate REALM into `transformers`. Our understanding of [ORQA](https://arxiv.org/pdf/1906.00300.pdf) and [REALM](https://arxiv.org/pdf/2002.08909.pdf) and how it relates to the integration to `transformes` is the following: - The ORQA paper was published before REALM. In ORQA only the retriever was pretrained. REALM was published afterwards and is (subjectively) an improved pre-training method for knowlegde-augmented language models. 
REALM compares its methods to ORQA by evaluating the models on open-ended question answering, *i.e.*: ``` We evaluate our approach by fine-tuning the models pre-trained with REALM on the task of Opendomain Question Answering (Open-QA), one of the most knowledge-intensive tasks in natural language processing. We evaluate on three popular Open-QA benchmarks (NATURALQUESTIONS-OPEN, WEBQUESTIONS, and CURATEDTREC) and compare to state-of-the-art Open-QA models, including both extremely large models that store knowledge implicitly (such as T5) as well as previous approaches that also use a knowledge retriever to access external knowledge, but implement retrieval in a more heuristic fashion (Lee et al., 2019 - ORQA; Min et al., 2019a; Asai et al., 2019). REALM achieves new state-of-the-art results on all three benchmarks, significantly outperforming all previous systems by 4-16% absolute accuracy. We also demonstrate qualitative benefits of REALM, including interpretability and modularity. ``` (on page 2) => this means that the REALM paper does not just provide a pretraining method, but also fine-tuned checkpoints for Open-QA. - As a conclusion REALM does not necessarly rely on the code of `ORQA`. REALM provides both pre-trained checkpoints: https://github.com/google-research/language/tree/master/language/realm#pre-trained-model-checkpoints as well as fine-tuned ones: https://github.com/google-research/language/tree/master/language/realm#pre-trained-model-checkpoints which from a logical point of view are 1-to-1 related to the REALM paper (since REALM evaluated its models on Open-QA). Therefore, in our opinion, the community should be able to load all of the following checkpoints within `modeling_realm.py`: - ``` REALM pre-trained with CC-News as the target corpus and Wikipedia as the knowledge corpus is available at gs://realm-data/cc_news_pretrained REALM fine-tuned to perform open-domain QA: on WebQuestions: gs://realm-data/orqa_wq_model_from_realm on NaturalQuestions: gs://realm-data/orqa_nq_model_from_realm ``` Therefore, I think we should add all the logic of your [codebase](https://github.com/qqaatw/pytorch-realm-orqa): `RealmSearcher` and `RealmReader` as well as `RealmForOpenQA` in `modeling_realm.py`. This has a lot of advantages also from a community's point of view: 1. REALM without an implementation for QA cannot really be used by most people as pretraining is just to expensive 2. REALM with QA would allow us to nicely demo the model. => would it be ok for you to implement `RealmSearcher` and `RealmReader` similar to how you've done it in [your code-base](https://github.com/qqaatw/pytorch-realm-orqa) as well as a `RealmForOpenQA` class that wraps both of those classes so that QA can be done with a single model instance. In a first step, I think we can transfer most of your code from https://github.com/qqaatw/pytorch-realm-orqa into `modeling_realm.py` and add the integration test - once that passes we can refactor a bit. In a second step, I think we should think a bit about a better abstraction of the retrieval part - ideally we can implement a `RealmRetriever` similar in design to the `RagRetriever` in https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py (@lhoestq and I are happy to help with that!) Does this make sense to you? cc @lhoestq <|||||>Thank you very much for your detailed replies @patrickvonplaten, I agree with you that we should add `RealmSearcher`, `RealmReader`, and `RealmQA` together. 
In fact, I initially think that because *REALM* and *ORQA* belong to different codebases, even if fine-tuned *ORQA* checkpoints of *REALM* are provided, they should be put into different models. However, considering UX and costly *REALM* pre-training procedure to most users, integrating them into a single model in `transformers` totally makes sense! The branch of `transformers` codebase in https://github.com/qqaatw/pytorch-realm-orqa is based on the branch of this PR, so I can merge it here seamlessly. Before doing so, there are some concerns that we should clarify first: 1. The term `Searcher` in ORQA's codebase is actually called `Retriever`, and I changed it to `Searcher` in order to prevent the name conflict between REALM's retriever and ORQA's retriever (Their logics are slightly different but have the same purpose.). Do you think this naming is OK? 2. Currently `RealmSearcher` is leveraging `ScaNN` python package as the vector similarity searcher, would it be OK to add this package into `transformers`' requirements in `setup.py`? If we can't, there are two possible alternatives: - Asking users to install it manually in the model's docs. - Implementing brute-force matrix multiplication to replace `ScaNN`, (mentioned [here](https://github.com/google-research/language/tree/master/language/realm#code)). 3. Since `block records` and `block embs` are bound together, `block records` is the corpus containing billions of documents, and `block embs` is a tensor with shape (num_block_records, retriever_proj_size) pre-computed from the corpus, if we upload `block records` into `datasets`'s index, we should also notify users that which model checkpoints in the hub containing `block embs` are corresponded to the `block records` in `datasets`. 4. Would it be suitable to upload Google's official pre-trained and fine-tuned checkpoints to the hub under my HF account? or uploading it under Google's org account is a better choice? If so, do I need to get permission from them first? (Although I've uploaded some checkpoints to my account for testing)<|||||>Hey @qqaatw, Thanks for your answer: 1) I think both "Searcher" and "Retriever" is fine! - happy to stick with "Searcher" 2) Yes good question - we'll probably have to add an optional dependency to `setup.py` indeed. We can take care of this later though. The idea is to verify that scann is installed as soon as `Realm....from_pretrained(...)` is called. For now, feel free to just add it and I think we can later take care of it. It would be nice to have both the possibility to use Scann as well as brute-force matrix multiplication (maybe we can add a switch to the REALM config) - something like `use_scann`? 3) cc @lhoestq what do you think here? Should we add both `block records` and `block embeds` to `datasets` with `block embeds` being model-specific or add just `block records` to `datasets` and `block embeds` to the corresponding model repo? 4) Yes for now feel free to upload under your account name. Thanks a lot for your work on this :-)<|||||>Hello @patrickvonplaten, thanks for your answer. I've added `RealmSearcher`, `RealmReader`, and `RealmForOpenQA` into the model; also, I've added `use_scann` option in the config for search method switching (brute-force matrix multiplication searcher has been implemented). We can complete integration tests as soon as the strategy of storing block records is decided.<|||||>> Hello @patrickvonplaten, thanks for your answer. 
> > I've added `RealmSearcher`, `RealmReader`, and `RealmForOpenQA` into the model; also, I've added `use_scann` option in the config for search method switching (brute-force matrix multiplication searcher has been implemented). > > We can complete integration tests as soon as the strategy of storing block records is decided. Awesome work @qqaatw :-) Did you already push the additions? I can't see `RealmForOpenQA` in `modeling_realm.py` yet. @lhoestq - I think we should store the block records in `datasets` no?<|||||>@patrickvonplaten The changes have been merged, thanks!<|||||>@qqaatw - ok I think we have a general outline of how to implement everything in mind now. I think this will be a slighly bigger PR than originally expected but the result should be a very nice model :-) I'll try to help you with the integration as much as possible in the coming days/weeks! In a first step, could you add a very hacky way of making an integration test pass? This way I can see exactly how each of the components interact with each other and can reproduce the results locally - feel free to retrieve the block records what is most sutiable for you right now. We'll iterate from this first version then :-)<|||||>I believe @patrickvonplaten and @lhoestq are on it, but it's a very big contribution (thanks @qqaatw!!) so it might take a bit of time. Sorry about that!<|||||>Hey @qqaatw, I freed up some time next week to dive into this! Very sorry for the delay<|||||>@patrickvonplaten, Thanks for taking time to dive into this, especially in the holiday. I'll keep tracking this thread to discuss more in detail. Merry Christmas!<|||||>> @patrickvonplaten, > > Thanks for taking time to dive into this, especially in the holiday. I'll keep tracking this thread to discuss more in detail. > > Merry Christmas! Hey @qqaatw, Thanks a lot for the nice words! Merry Christmas to you as well! The docs are now cleaned and now we can start to look how to best integrate REALM :-) In a first step, it would be amazing if we could make sure that the performance of REALM matches more or less the paper. I see that you have some very nice analysis in your repo here: https://github.com/qqaatw/pytorch-realm-orqa#naturalquestionsnq Do you think you could post a command here that allows to reproduce those results with the current code, *e.g.* using this format: ```python model = RealmForOpenQA.from_pretrained( r"qqaatw/realm-orqa-nq-searcher", r"qqaatw/realm-orqa-nq-reader", BLOCK_RECORDS_PATH, ) question = "Who is the pioneer in modern computer science?" searcher_output, reader_output, predicted_answer = model(question) self.assertEqual(predicted_answer, "alan mathison turing") ``` just like in the test: `test_inference_open_qa`. Once we have verified this, I think I'm able to quickly onboard @lhoestq and others to get this PR merged. Let me know if this is not very clear here :-) In a first step I downloaded `https://storage.cloud.google.com/orqa-data/enwiki-20181220/blocks.tfr` and ran `test_inference_open_qa`. The weird thing is that when running it multiple times, sometimes I'm getting the correct answer, but unfortunately I also sometimes (20% of the time) get the following error: ```bash E AssertionError: 'charles babbage' != 'alan mathison turing' E - charles babbage E + alan mathison turing tests/test_modeling_realm.py:489: AssertionError ``` So there seems to be some kind of randomness in the forward pass - could you verify whether this is the case for you as well? 80% of the time I'm however getting the correct solution. 
Note that the reason could also be tiny differences in logits precision for multiple forward passes which I've previously seen with RAG as well. So if you can't reproduce the result it's fine as is I think :-) The important part would now be to verify your eval results here: https://github.com/qqaatw/pytorch-realm-orqa#naturalquestionsnq with this HF implementation . BTW, I'm using the following envs (transformers on this branch): ``` - `transformers` version: 4.16.0.dev0 (this branch) - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` and ``` scann version: 1.2.4 ``` in case this might be a reason for the randomness in the test.<|||||>Hi @patrickvonplaten, Thanks for testing this out. I cannot reproduce the failure with brute-force searcher but can reproduce with ScaNN searcher. It seems that the result from ScaNN is sometimes not deterministic using the same ScaNN parameters as that of in the `orqa` codebase. In contrast, because brute-force searcher always computes all inner products and finds the top K highest scores, it produces consistent results. Here is the way that uses brute-force searcher: ```python realm_config=RealmConfig(use_scann=False) model = RealmForOpenQA.from_pretrained( r"qqaatw/realm-orqa-nq-searcher", r"qqaatw/realm-orqa-nq-reader", BLOCK_RECORDS_PATH, config=realm_config, ) question = "Who is the pioneer in modern computer science?" searcher_output, reader_output, predicted_answer = model(question) assert predicted_answer == "alan mathison turing" ``` My testing env: ``` - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) - Jax version: 0.2.24 - JaxLib version: 0.1.69 - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ScaNN version: scann 1.2.3 ``` For benchmark reproductions, because it requires some data loading, preprocessing, and evaluation function, there might not be practical to paste the entire code here. I just wrote a compact [script](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/benchmark.py) that reproduces both NQ and WQ benchmark results using current up-to-date HF impl. To run NQ: ```bash python benchmark.py \ --dataset_name_path natural_questions \ --retriever_pretrained_name qqaatw/realm-orqa-nq-searcher \ --checkpoint_pretrained_name qqaatw/realm-orqa-nq-reader \ --block_records_path ./data/enwiki-20181220/blocks.tfr \ --device cuda ``` To run WQ: ```bash python benchmark.py \ --dataset_name_path web_questions \ --retriever_pretrained_name qqaatw/realm-orqa-wq-searcher \ --checkpoint_pretrained_name qqaatw/realm-orqa-wq-reader \ --block_records_path ./data/enwiki-20181220/blocks.tfr \ --device cuda ```<|||||>> Hi @patrickvonplaten, > > Thanks for testing this out. I cannot reproduce the failure with brute-force searcher but can reproduce with ScaNN searcher. > > It seems that the result from ScaNN is sometimes not deterministic using the same ScaNN parameters as that of in the `orqa` codebase. In contrast, because brute-force searcher always computes all inner products and finds the top K highest scores, it produces consistent results. 
> > Here is the way that uses brute-force searcher: > > ```python > realm_config=RealmConfig(use_scann=False) > model = RealmForOpenQA.from_pretrained( > r"qqaatw/realm-orqa-nq-searcher", > r"qqaatw/realm-orqa-nq-reader", > BLOCK_RECORDS_PATH, > config=realm_config, > ) > > question = "Who is the pioneer in modern computer science?" > searcher_output, reader_output, predicted_answer = model(question) > > assert predicted_answer == "alan mathison turing" > ``` > > My testing env: > > ``` > - `transformers` version: 4.16.0.dev0 > - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17 > - Python version: 3.8.10 > - PyTorch version (GPU?): 1.9.0+cu111 (True) > - Tensorflow version (GPU?): 2.6.0 (False) > - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) > - Jax version: 0.2.24 > - JaxLib version: 0.1.69 > - Using GPU in script?: False > - Using distributed or parallel set-up in script?: False > > ScaNN version: > scann 1.2.3 > ``` > > For benchmark reproductions, because it requires some data loading, preprocessing, and evaluation function, there might not be practical to paste the entire code here. I just wrote a compact [script](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/benchmark.py) that reproduces both NQ and WQ benchmark results using current up-to-date HF impl. > > To run NQ: > > ```shell > python benchmark.py \ > --dataset_name_path natural_questions \ > --retriever_pretrained_name qqaatw/realm-orqa-nq-searcher \ > --checkpoint_pretrained_name qqaatw/realm-orqa-nq-reader \ > --block_records_path ./data/enwiki-20181220/blocks.tfr \ > --device cuda > ``` > > To run WQ: > > ```shell > python benchmark.py \ > --dataset_name_path web_questions \ > --retriever_pretrained_name qqaatw/realm-orqa-wq-searcher \ > --checkpoint_pretrained_name qqaatw/realm-orqa-wq-reader \ > --block_records_path ./data/enwiki-20181220/blocks.tfr \ > --device cuda > ``` Thanks for diving into the random output story. I indeed only tried the "ScaNN" approach. Given that "ScaNN" has a TF dependency I think it's better anyways to switch to `use_scann=False`. Just tried it and it seems to work well. Thanks a lot for providing the benchmarking script - I'll run it today and start to think about how we can best integrate the model :-) That's great! That's exactly what I was looking for :-) <|||||>@qqaatw - I can reproduce the eval results which is great. I've started to modify the structure of the elements a bit. I still need to do some modifications tomorrow, but after that I could maybe hand back to you :-) <|||||>Hey @qqaatw, I think we are pretty close to have a version that can be integrated into `transformers`. I've now finished the main changes, which were: - No `tokenizer` should ever be saved in a model, which is why I moved all tokenizer logic out of the model's forward pass - Non-PyTorch related code to the retrieval should be handled by a `RealmRetriever` class which I have added and integrated into the model. - The model `RealmForOpenQA` now **only** does PyTorch matrix multiplication and no tokenizer or retrieval-like operations. This way `RealmForOpenQA` keeps the same format of all other Transformer models. For now, I think it is enough to ensure whenever commits are finished that the test `test_inference_open_qa` still works correctly. @qqaatw - the next big step now is to fully remove the `RealmSearcher` class and to load all necessary weights directly in `RealmForOpenQA` . 
The class `RealmForOpenQA` also should not have a special `from_pretrained(...)` or `save_pretrained(...)` method but should be able to use the default ones. We are looking for the following API in the end: ```python question = "some question" model_id = "/path/to/full/realm/model/on/hub/" tokenizer = RealmTokenizer.from_pretrained(model_id) retriever = RealmRetriever.from_pretrained(model_id) # will load the heavy tf records file model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever) question_ids = tokenizer(question, return_tensors="pt").input_ids predicted_ids = model(question_ids)[0] answer = tokenizer.decode(predicted_ids) ``` which is very similar to https://huggingface.co/facebook/rag-token-nq#usage<|||||>To get ```model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)``` you will probably have to do the following: 1. Delete the special `save_pretrained(...)` method in `RealmForQA` 2. Load `RealmForQA` as done in the `test_inference_open_qa` now. 3. Then save the model as **one** using the default `save_pretrained(...)` method (since you deleted the old one) in `/temp_dir` 4. Delete the special `from_pretrained(...)` method in `RealmForQA` 5. Now try to load the model with `model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)` where `model_id` is your just saved `/temp_dir` directory. 6. Debug this until `test_inference_open_qa` works and then upload the single `pytorch_model.bin` file to a repo<|||||>Let me know if you have difficulties with this and I can try to take a look :-)<|||||>Hey @patrickvonplaten, Thanks a lot for doing these modifications. I will work on it tomorrow.<|||||>Hey @qqaatw, I've done the main modifications now to form the structure into what is common for retrieval-augmented models in `transformers`. See my comments here: https://github.com/huggingface/transformers/pull/13292#discussion_r777452449 and here: https://github.com/huggingface/transformers/pull/13292/files#r777456805. Is this ok for you? Does that make sense? Do you want to discuss anything? Could you maybe copy the checkpoint: https://huggingface.co/patrickvonplaten/realm-open-qa under your namespace and play around with it to see if you're ok with those changes? The next steps would be: - refactor the code (make it cleaner, more comments, treat the suggestions above, add `RealmForOpenQAOutputs`, etc...) - Then we also need to wait for @lhoestq to see how we can best store the tf records in a retriever Let me know if you have any questions or would like to discuss anything :-) <|||||>Hi @patrickvonplaten, I've completed the most modifications and still need to do some improvements. Let me know if there is anything that needs to be improved or fixed. <|||||>Great work @qqaatw! We are very close with merging this PR I think. Everything fits very well now IMO and there is not much left to to: - Rename `bert` to `realm` - Add a `test_realm_retrieval.py` file (I can take care of it after talking to @lhoestq) - See how to best integrate the block records in the retriever `.from_pretrained(...)` method. (I'll also take care of this). We could then think also a bit about how to best demo this model :-) Maybe in a space (depending on how big the required RAM is). Also I think a blog post could be a great idea<|||||>@qqaatw - I just discussed with @lhoestq that we should store the block records as a numpy file on the hub. 
I've done some final changes to the retriever and uploaded the block records in numpy format here: https://huggingface.co/qqaatw/realm-orqa-nq-openqa/commit/ea416e495785cd9612f659b69af3a7857c91fe2a Note that this has the **big** advantage that your PyTorch code is now fully independent of TF -> there is no need to install TF anymore :-) I think the API is not more or less complete. The test `test_inference_open_qa` should now work without having to call upon any hard-coded path. Could you try it out on your side? I noticed that you already corrected your checkpoint to have `realm` in the weight names -> that's great. I've done some updates in the corresponding modeling file (hope you see this before doing the same thing yourself). Think we can merge the PR now by this week :-) It would be great if you could do some final clean-ups (fixing potentially failing tests) and if you want you could also give the `test_retrieval_realm.py` file a try (otherwise I can to it tomorrow or Friday). Let me know if you have any questions or need help :-)<|||||>Hey @patrickvonplaten, Currently all the tests pass, and the new `block_records` handling looks working well. Thank you for making these changes. Now I think we should upload the `.npy` file to every related model repo right? Could you help this out as the file is so huge that might take a long time to upload from my side. (should upload to the repo with suffix `openqa` I think) Also, I've written a standalone checkpoint conversion script [here](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/checkpoint_converter.py) as well as some instructions on the [readme](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/README.md). Do you think we just provide a link directing to my repo in the model docs or should place it into `transformers`? By the way, I'm not sure how to preview the generated docs website (I see this feature is under development in `CONTRIBUTING.md`). <|||||>> We could then think also a bit about how to best demo this model :-) Maybe in a space (depending on how big the required RAM is). Also I think a blog post could be a great idea Both sound appealing! Let me know if I could help with this.<|||||>> Hey @patrickvonplaten, > > Currently all the tests pass, and the new `block_records` handling looks working well. Thank you for making these changes. Now I think we should upload the `.npy` file to every related model repo right? Could you help this out as the file is so huge that might take a long time to upload from my side. (should upload to the repo with suffix `openqa` I think) > > Also, I've written a standalone checkpoint conversion script [here](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/checkpoint_converter.py) as well as some instructions on the [readme](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/README.md). Do you think we just provide a link directing to my repo in the model docs or should place it into `transformers`? > > By the way, I'm not sure how to preview the generated docs website (I see this feature is under development in `CONTRIBUTING.md`). I can definitely help with the uploading part once the PR is merged :-)<|||||>Amazing job ! Thanks for converting the records blocks to numpy format, it's more convenient this way. Later we can even allow to memory-map the numpy file to avoid filling up the RAM :)<|||||>I think this PR is more or less ready to be merged. 
@sgugger @LysandreJik could you give it a quick look maybe?<|||||>All test failures are due to the latest release of Tokenizers. A quick rebase on master should get rid of them.<|||||>Thanks for all your work on this!
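For readers skimming this thread, here is a minimal, self-contained sketch of the brute-force searcher idea discussed above (the `use_scann=False` path): an exhaustive inner-product search over pre-computed block embeddings followed by a top-k selection. The tensor names, shapes, and the helper function are illustrative assumptions, not the exact code in `modeling_realm.py`.

```python
import torch

def brute_force_search(question_emb: torch.Tensor, block_embs: torch.Tensor, k: int = 5):
    """Exhaustive maximum-inner-product search (hypothetical helper).

    question_emb: (proj_size,) projected question embedding.
    block_embs:   (num_blocks, proj_size) pre-computed block embeddings.
    Returns the scores and indices of the k highest-scoring blocks.
    """
    scores = block_embs @ question_emb  # all inner products at once, shape (num_blocks,)
    top_scores, top_indices = torch.topk(scores, k=k)
    return top_scores, top_indices

# Toy usage with random data standing in for the real embeddings.
question_emb = torch.randn(128)
block_embs = torch.randn(10_000, 128)
scores, indices = brute_force_search(question_emb, block_embs, k=5)
print(scores, indices)
```

Unlike an approximate searcher such as ScaNN, this computation is deterministic, which matches the observation above that `use_scann=False` produces consistent answers.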
transformers
13,291
closed
torch longformer to tf longformer
# 🚀 Feature request
1. How can this model be used in Triton?
2. How can it be converted to a TF model?

## Motivation
I downloaded the model from https://huggingface.co/xcjthu/Lawformer/tree/main, which is a PyTorch model. It cannot be exported to a `.pt` file using `torch.jit.trace`. I want to serve it with Triton, so I would like to convert it to a TF model and then export it as a SavedModel.

## Your contribution
Sorry, I have no idea how to implement this, so I need your help. Thanks a lot.
08-27-2021 03:41:19
08-27-2021 03:41:19
Can't help you with the first question, but for the second:

> how to turn it into a tf model

```python
from transformers import TFLongformerModel

model = TFLongformerModel.from_pretrained("xcjthu/Lawformer", from_pt=True)
```
<|||||>Hey @aixuedegege, please use the forum: https://discuss.huggingface.co/ for questions like your first one, as it's not really a bug in the library but more a general question. Your chances of getting a good answer should be higher there :-)<|||||>@NielsRogge Thanks for your answer. But using your code I got a warning: "Some weights or buffers of the TF 2.0 model TFLongformerModel were not initialized from the PyTorch model and are newly initialized". I will go to https://discuss.huggingface.co/ to discuss this.
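For the Triton half of the question, one possible route - a sketch only, and Longformer's dynamic attention windows may still complicate serving - is to convert the checkpoint to TensorFlow as shown above and then export a SavedModel, which Triton's TensorFlow backend can load. The output directory name is arbitrary, and the `saved_model` flag is assumed to be available in recent `transformers` releases.

```python
from transformers import TFLongformerModel

# Convert the PyTorch checkpoint to TensorFlow (as suggested in the thread).
model = TFLongformerModel.from_pretrained("xcjthu/Lawformer", from_pt=True)

# Write both the usual TF checkpoint and a SavedModel under lawformer-tf/saved_model/1.
model.save_pretrained("lawformer-tf", saved_model=True)
```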
transformers
13,290
closed
Add GPT2ForTokenClassification
# What does this PR do?
- Add `GPT2ForTokenClassification` class for GPT2 upstream and NER downstream task

## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@patrickvonplaten, @LysandreJik @sgugger, @patil-suraj
08-27-2021 03:13:21
08-27-2021 03:13:21
Fixed all test failures and all tests now pass.<|||||>@sgugger Thanks for approving. @patrickvonplaten I just fixed some errors that occurred in the `run_tests_torch` CI job, and all tests now pass.
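For context, a minimal usage sketch of the class added in this PR; the base checkpoint, the label count, and the pad-token handling are illustrative choices, and the classification head is randomly initialized until it is fine-tuned.

```python
import torch
from transformers import GPT2TokenizerFast, GPT2ForTokenClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

model = GPT2ForTokenClassification.from_pretrained("gpt2", num_labels=9)
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, num_labels)
print(logits.argmax(dim=-1))
```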
transformers
13,289
closed
Fix minor typo in parallelism doc
# What does this PR do?
This PR addresses a very minor typo in `parallelism.md`.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
08-27-2021 03:04:00
08-27-2021 03:04:00
transformers
13,288
closed
GPT2 for classification - Errors encountered while running run_glue.py and (possible) fixes
Here is a description of series of errors I encountered while fine-tuning gpt2 pre-trained model using run_glue.py (which were also reported [here](https://github.com/huggingface/transformers/issues/13123)). I am also mentioning here the code fixes I had to make to fix these errors. If the custodians of the code-base are happy with the changes, I will be glad to check the changes in if the set of instructions to submit the patch, get it reviewed and checkin are shared with me. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.0-1051-azure-x86_64-with-glibc2.10 - Python version: 3.8.1 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: Yes (1 gpu) - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten, @sgugger, @patil-suraj Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) examples/tensorflow/text-classification/run_glue.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: GLUE ## To reproduce Steps to reproduce the behavior: (applicable to any GLUE classification task) 1. python run_glue.py --model_name_or_path gpt2 --task_name sst2 --do_train --do_eval --do_predict --output_dir ./output **Error 1** File "run_glue.py", line 567, in <module> main() File "run_glue.py", line 415, in main optimizer = tf.keras.optimizers.Adam( File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/adam.py", line 115, in __init__ super(Adam, self).__init__(name, **kwargs) File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 335, in __init__ raise ValueError("Gradient clipping in the optimizer " ValueError: Gradient clipping in the optimizer (by setting clipnorm or clipvalue) is currently unsupported when using a distribution strategy. 
**Fix** Don't set the clipnorm parameter # clipnorm=training_args.max_grad_norm, **Error 2** ValueError: in user code: /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/one_device_strategy.py:184 run return super(OneDeviceStrategy, self).run(fn, args, kwargs, options) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/one_device_strategy.py:367 _call_for_each_replica return fn(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step ** outputs = model.train_step(data) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:748 train_step loss = self.compiled_loss( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:149 __call__ losses = ag_call(y_true, y_pred) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:253 call ** return ag_fn(y_true, y_pred, **self._fn_kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:1566 sparse_categorical_crossentropy return K.sparse_categorical_crossentropy( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/backend.py:4790 sparse_categorical_crossentropy return array_ops.reshape(res, output_shape[:-1]) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:195 reshape result = gen_array_ops.reshape(tensor, shape, name) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/ops/gen_array_ops.py:8233 reshape _, _, _op, _outputs = _op_def_library._apply_op_helper( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:742 _apply_op_helper op = g._create_op_internal(op_type_name, inputs, dtypes=None, /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py:591 _create_op_internal return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:3477 _create_op_internal ret = Operation( 
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:1974 __init__ self._c_op = _create_c_op(self._graph, node_def, inputs, /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Dimension size must be evenly divisible by 192 but is 8 for '{{node sparse_categorical_crossentropy_2/Reshape_2}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](sparse_categorical_crossentropy_2/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits, sparse_categorical_crossentropy_2/strided_slice_1)' with input shapes: [8], [4] and with input tensors computed as partial shapes: input[1] = [2,8,12,?].

**Fix**
It looks like the call to **TFGPT2ForSequenceClassification** returns logits of shape (batch_size, sequence_length, num_labels), which is causing the above error. After pooled_logits are computed, add the following line to extract the logits from the last step of the sequence:

```python
pooled_logits = pooled_logits[:, -1, :]
```

and change

```python
return TFSequenceClassifierOutputWithPast(
    loss=loss,
    logits=pooled_logits,
    past_key_values=transformer_outputs.past_key_values,
    hidden_states=transformer_outputs.hidden_states,
    attentions=transformer_outputs.attentions,
)
```

to

```python
return TFSequenceClassifierOutputWithPast(
    logits=pooled_logits,
)
```

## Expected behavior
Successful completion of training and evaluation
08-26-2021 22:47:19
08-26-2021 22:47:19
Hey @bpraveenk, could you attach a google colab to reproduce the error here? Pinging @Rocketknight1 for TF here.<|||||>TF maintainer here! I reproduced the second error but not the first - but the second one seems like a much more serious problem anyway. The problem does not occur for me in any other models I tested except GPT2, but possibly there are other CLM models where this occurs. My suspicion is that the bug is in our `TFGPT2ForSequenceClassification` code, not in `run_glue.py`. Although you can write some code in `run_glue.py` to work around it, this might break other models that are currently working, like BERT. Either way, thank you for finding this! If you want to try to fix this yourself, please let me know, and ask any questions you like. Please make sure that any fixes you submit also work with MLM models like `bert-base-uncased` as well as `gpt2` though!<|||||>Hey, actually, on further examination, I think the issue is that all CLM-trained models return outputs with past states. Therefore, all we need to do is check whether the output is an instance of `TFSequenceClassifierOutputWithPast`, in `run_glue.py`, and if so, to take `pooled_logits[:, -1, :]` as you suggested, and we shouldn't need to modify the GPT-2 code at all.<|||||>Thank you @Rocketknight1 for your prompt response. I am glad I could help! After going over this tensorflow [issue](https://github.com/tensorflow/tensorflow/issues/33929#issuecomment-634181668), I guess the first error was probably resolved in later version of tensorflow-2.x. Could you share the version of tf that you are using to reproduce the error? Regarding error 2, adding `pooled_logits = pooled_logits[:, -1, :]` alone did not work for me. I had to remove the past states (see below) from the return object for training to proceed successfully. I recommend running the code in tensorflow-eager mode to see more descriptive error. The change I made is specific to GPT2 classification model and it didn't affect fine-tuning/training other models, e.g., bert-base-uncased, which I used to test the change. `return TFSequenceClassifierOutputWithPast( logits=pooled_logits, ) ` Just curious, would your proposed solution to check the instance of the output (e.g., `TFSequenceClassifierOutputWithPast`) in run_glue.py work with `model.fit`? Since the change is in run_glue.py, perhaps we should test the solution to make sure it works with other models too. On a related note, what are your thoughts on using a flag to control the inclusion of past-states and loss in the GPT2Classification model forward-pass output? I am happy to fix the bug. Could you please point me to the document which includes steps to run relevant unit-tests, submit a patch and get it reviewed by the maintainers before its merged? <|||||>Hi @bpraveenk! I was using TF 2.5, which might explain why I didn't see the first error. However, you're correct that the fix I suggested won't work with `model.fit`, so we would need some way to get CLM models to stop returning those past states. I'm going to check with the rest of the team about whether returning `TFSequenceClassifierOutputWithPast` is intended in this case, and what we can do about it. If we decide a flag like you suggested is appropriate, I'd be happy to work with you on implementing that. Also, this isn't really relevant, but can I ask why you want to use a CLM model like GPT-2 for sequence classification instead of a more normal MLM model? 
It's definitely something we should be supporting, but it's still quite rare, so I'm curious to know what your use-case is there!<|||||>Thank you @Rocketknight1 for your detailed response. I was curious to benchmark the performance of GPT2 against other LMs on classification tasks.<|||||>That's interesting - my intuition is that it will do worse than MLMs, though it has the advantage of being quite a large model. That said, we're adding some equally-big MLM models to the hub, including a TF port of DeBERTaV2 in the next few days, which would be an interesting point of comparison. I'd love to see your benchmark results when they're ready!<|||||>It's indeed exciting to hear that large MLM models will be made available! For the discriminative vs. generative model comparison I am planning to use the BART (encoder-decoder) model as well. Do I have to write custom code to fine-tune the BART model on GLUE tasks or can I use run_glue.py?<|||||>BART is a Seq2Seq model, and I'm not sure if we have a TF implementation of a sequence classifier head for it, unfortunately. You might have to build your own model, starting from TFBartModel and then adding a classifier head on top.<|||||>It seems that passing the pad_token_id works too? I ran into the same problem today when I wanted to build a classifier head on top of TFGPT2Model. I tried to follow the source code in modeling_tf_gpt2.py and build a dense layer after the transformer (the GPT-2 in this case), but I forgot this step: `in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)`, so a shape-mismatch error occurred when I called the fit function. Thanks @bpraveenk @Rocketknight1, you did me a big favor in fixing the bug. Now I use dense()[:,-1,:] instead of dense() as the output, and it can fit now. But I still wonder why I get different outputs between TFAutoModelForSequenceClassification and my model (TFGPT2Model + dense(I copied the 'score' parameters from modeling_tf_gpt2.py)[:,-1,:]). Is it because of the weights, which I haven't trained? (But I guess the weights of the score layer haven't been trained in TFAutoModelForSequenceClassification either...)

**my custom model:**

```python
from tensorflow.keras.layers import Dense
import tensorflow as tf

input_ids = tf.keras.layers.Input(shape=(128,), name='input_ids', dtype='int32')
attention_mask = tf.keras.layers.Input(shape=(128,), name='attention_mask', dtype='int32')
embeddings = gpt2_hf(input_ids=input_ids, attention_mask=attention_mask)[0]
score = tf.keras.layers.Dense(
    112,
    kernel_initializer=tf.initializers.TruncatedNormal(config.initializer_range),
    name="score",
    use_bias=False,
)(embeddings)[:, -1, :]
model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=score, name='GPT2_Multiclass')
```

**pad token id**

```python
if self.config.pad_token_id is None:
    sequence_lengths = -1
else:
    if inputs["input_ids"] is not None:
        sequence_lengths = (
            tf.reduce_sum(
                tf.cast(
                    tf.math.not_equal(inputs["input_ids"], self.config.pad_token_id),
                    dtype=inputs["input_ids"].dtype,
                ),
                -1,
                keepdims=False,
            )
            - 1
        )
in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
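To tie the thread together, here is a standalone sketch of the last-non-padding-token pooling that the fix relies on, mirroring the `tf.gather` snippet quoted above. The function name and the toy tensors are illustrative, not library code.

```python
import tensorflow as tf

def pool_last_token_logits(logits, input_ids, pad_token_id):
    """Select, for each sequence, the logits at its last non-padding position.

    logits:    (batch, seq_len, num_labels) per-token classification logits.
    input_ids: (batch, seq_len) token ids, used to locate the padding.
    Returns a (batch, num_labels) tensor suitable for a sequence-level loss.
    """
    # Index of the last real (non-pad) token in each row.
    sequence_lengths = tf.reduce_sum(
        tf.cast(tf.math.not_equal(input_ids, pad_token_id), tf.int32), axis=-1
    ) - 1
    return tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)

# Toy example: batch of 2, seq_len 4, 3 labels, pad id 0.
logits = tf.random.normal((2, 4, 3))
input_ids = tf.constant([[5, 6, 7, 0], [8, 9, 0, 0]])
pooled = pool_last_token_logits(logits, input_ids, pad_token_id=0)
print(pooled.shape)  # (2, 3)
```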
transformers
13,287
closed
Pretraining T5-v1_1 on Flax
@patrickvonplaten In the [Flax tutorial](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) it is recommended to load the config from t5-v1_1-base when pretraining, using:

`config = T5Config.from_pretrained("google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size())`

This basically copies this [config](https://huggingface.co/google/t5-v1_1-base/blob/main/config.json). It seems like this is tuned for finetuning, since it has the line '"dropout_rate": 0.1'. Google [states](https://huggingface.co/google/t5-v1_1-base) that _"Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning."_ Should this be modified for pretraining?

Google also states that there is "no parameter sharing between embedding and classifier layer". How is this achieved?
08-26-2021 18:18:16
08-26-2021 18:18:16
Think both using `dropout_rate: 0.1` and not using it is fine! It also depends on the dataset you are using. The Flax T5 demo trains on Norwegian, which is much smaller than English, so it makes more sense here to use dropout for regularization.<|||||>OK, thanks. I'm training a much larger model here, just using the Flax T5 demo as a starting point. But if I understand you correctly, simply changing this manually to 'dropout_rate: 0' would then be more in line with what Google describes for v1.1 - and then changing it back before finetuning. What about the change that is called _"no parameter sharing between embedding and classifier layer"_ that T5 v1.1 is using? I was unable to see how this is implemented in the example code. Is this a setting in config.json, or does it require changing the architecture?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
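As a concrete illustration of the two points discussed above, a hedged sketch of adjusting the config for pre-training. The vocab size is a placeholder for `tokenizer.get_vocab_size()`, and the claim that `tie_word_embeddings` is already `False` in the v1.1 config should be double-checked against `google/t5-v1_1-base` before relying on it.

```python
from transformers import T5Config

# Start from the v1.1 config but disable dropout for pre-training,
# re-enabling it later for fine-tuning as the model card suggests.
config = T5Config.from_pretrained(
    "google/t5-v1_1-base",
    vocab_size=32128,      # replace with tokenizer.get_vocab_size() as in the Flax example
    dropout_rate=0.0,
)

# "No parameter sharing between embedding and classifier layer" is controlled
# by a config flag rather than a code change.
print(config.tie_word_embeddings)
```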
transformers
13,286
closed
Moving `token-classification` pipeline to new testing.
# What does this PR do?
Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 15:32:00
08-26-2021 15:32:00
transformers
13,285
closed
Moving `text-generation` pipeline to new testing framework.
# What does this PR do?
Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 14:44:32
08-26-2021 14:44:32
transformers
13,284
closed
Question about bart-base model
When I use 'bart-base', I only want to update some of the parameters, so I call `model.named_parameters()` and set `requires_grad` to `True` for the parameters I want to update and to `False` for the others. However, when I call `model.state_dict()`, I found that "model.encoder.embed_tokens.weight", "model.decoder.embed_tokens.weight", and "lm_head.weight" all appear in `state_dict()` but not in `named_parameters()`. After checking the initialization code of BartModel, I found all these weights are initialized with `nn.Embedding()` or `nn.Linear()`:

```python
self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
self.encoder = BartEncoder(config, self.shared)
self.decoder = BartDecoder(config, self.shared)
```

```python
self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
```

So all of them could, in theory, be updated during training. Why don't they appear in `model.named_parameters()`?
08-26-2021 14:23:38
08-26-2021 14:23:38
`BartModel` itself doesn't have a language modeling head, only `BartForConditionalGeneration` does. The latter adds a language modeling head on top of `BartModel`.<|||||>> `BartModel` itself doesn't have a language modeling head, only `BartForConditionalGeneration` does. The latter adds a language modeling head on top of `BartModel`. Thanks for your reply!! Sorry that my description is not clear. The model I used is BartForConditionalGeneration. As I described above, "model.encoder.embed_tokens.weight", "model.decoder.embed_tokens.weight","lm_head.weight" , as well as"final_logits_bias" appear in model.state_dict() but not in model.named_parameters(). I know that "final_logits_bias" is registered in model.buffers(), so it's normal. But aren't the other three supposed to be trainable in downstream missions(which means they should be in model.parameters())? <|||||>That's because input and output embeddings are tied (i.e. shared). This can be verified by printing the named parameters: ``` from transformers import BartForConditionalGeneration model = BartForConditionalGeneration.from_pretrained("facebook/bart-base") for name, param in model.named_parameters(): print(name, param.shape) ``` which prints: ``` model.shared.weight torch.Size([50265, 768]) (...) ``` You can also verify that the weights of the embed_tokens and lm_head for example are exactly the same, like so: ``` import torch assert torch.allclose(model.model.encoder.embed_tokens.weight, model.lm_head.weight) ```<|||||>> That's because input and output embeddings are tied (i.e. shared). This can be verified by printing the named parameters: > > ``` > from transformers import BartForConditionalGeneration > > model = BartForConditionalGeneration.from_pretrained("facebook/bart-base") > > for name, param in model.named_parameters(): > print(name, param.shape) > ``` > > which prints: > > ``` > model.shared.weight torch.Size([50265, 768]) > (...) > ``` > > You can also verify that the weights of the embed_tokens and lm_head for example are exactly the same, like so: > > ``` > import torch > > assert torch.allclose(model.model.encoder.embed_tokens.weight, model.lm_head.weight) > ``` I got it. Thanks very much for your patient answer!!!
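Coming back to the original goal of the question (updating only part of the model), here is a small sketch of freezing everything except selected parameters by name. The keyword filter is only an example; because the embeddings are tied, they all live under `model.shared.weight` in `named_parameters()`.

```python
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Example filter only: train the shared embedding (and hence the tied lm_head).
trainable_keywords = ("model.shared",)

for name, param in model.named_parameters():
    param.requires_grad = any(key in name for key in trainable_keywords)

trainable = [name for name, param in model.named_parameters() if param.requires_grad]
print(trainable)  # ['model.shared.weight'] - the tied lm_head uses this same tensor
```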
transformers
13,283
closed
Moving `text2text-generation` to new pipeline testing mecanism
# What does this PR do?
Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 14:12:24
08-26-2021 14:12:24
transformers
13,282
closed
Hotfixing master tests.
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 14:02:15
08-26-2021 14:02:15
transformers
13,281
closed
Moving `table-question-answering` pipeline to new testing
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 13:43:09
08-26-2021 13:43:09
transformers
13,280
closed
Moving `table-question-answering` pipeline to new testing.
Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 12:47:17
08-26-2021 12:47:17
transformers
13,279
closed
Moving `summarization` pipeline to new testing format.
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 10:56:24
08-26-2021 10:56:24
transformers
13,278
closed
[Hotfix] Fixing the test (warnings was incorrect.)
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 10:10:52
08-26-2021 10:10:52
transformers
13,277
closed
Moving question_answering tests to the new testing scheme. Had to tweak some ModelTesterConfig settings a little for pipelines.
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 10:05:32
08-26-2021 10:05:32
transformers
13,276
closed
Announcing the default model used by the pipeline (with a link).
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/12845 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 09:44:34
08-26-2021 09:44:34
transformers
13,275
closed
Fix BeitForMaskedImageModeling
# What does this PR do? Fixes #13235 I've also added an integration test for BeitForMaskedImageModeling, to make sure it returns the same logits as the original implementation on the same input image.
08-26-2021 09:41:58
08-26-2021 09:41:58
transformers
13,274
closed
`pipeline` backed with ONNX Runtime and quantization for faster inference
# 🚀 Feature request It would be convenient if, when loading a pipeline, the model were converted to an ONNX `InferenceSession` (optionally with quantization) as part of the loading step, since these features provide significant speedups. Alternatively, the conversion could be done ahead of time, i.e. the `.onnx` file could be loaded from the Hub alongside the other model assets. Either way, the main goal is to make the ONNX Runtime speedups available from the pipeline object. One would instantiate the `pipeline` object this way: ```python nlp = pipeline('text-classification', onnx_runtime=True, quantization=True) ``` Reference: https://onnxruntime.ai/docs/tutorials/inferencing/huggingface.html ## Motivation The speed and memory (with quantization) gains. ## Your contribution I could try :)
08-26-2021 09:08:58
08-26-2021 09:08:58
@sgugger @LysandreJik find this interesting?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
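For reference, a rough sketch of what running pipeline-style text classification through ONNX Runtime looks like today, assuming the model has already been exported to a local `model.onnx` file (the file path is a placeholder, and this is not the proposed `pipeline(..., onnx_runtime=True)` API):

```python
# Hedged sketch: assumes an existing ONNX export at "model.onnx" (placeholder path).
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
session = ort.InferenceSession("model.onnx")

encoded = tokenizer("This library is great!", return_tensors="np")
# ONNX Runtime wants plain int64 numpy arrays keyed by the graph's input names.
feed = {name: np.asarray(array, dtype=np.int64) for name, array in encoded.items()}

logits = session.run(None, feed)[0]
print(int(np.argmax(logits, axis=-1)[0]))  # index of the predicted label
```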
transformers
13,273
closed
Docs: TrainingArguments call incorrect
On [this](https://huggingface.co/transformers/training.html) page, there's the following code: ``` from transformers import TrainingArguments training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch") ``` However, `evaluation_strategy` needs to be an `IntervalStrategy` instead of a `string`. The way `TrainingArguments` actually needs to be called is: ``` training_args = TrainingArguments("test_trainer", evaluation_strategy=IntervalStrategy.EPOCH) ```
08-26-2021 08:59:27
08-26-2021 08:59:27
No, this example is correct, as both syntaxes work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
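A small check (not part of the original exchange) that both spellings end up as the same value, since `TrainingArguments` normalizes the string into the enum on initialization:

```python
# Both the string and the enum forms resolve to the same IntervalStrategy value.
from transformers import TrainingArguments
from transformers.trainer_utils import IntervalStrategy

args_from_str = TrainingArguments("test_trainer", evaluation_strategy="epoch")
args_from_enum = TrainingArguments("test_trainer", evaluation_strategy=IntervalStrategy.EPOCH)

assert args_from_str.evaluation_strategy == args_from_enum.evaluation_strategy == IntervalStrategy.EPOCH
print(args_from_str.evaluation_strategy)  # IntervalStrategy.EPOCH
```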
transformers
13,272
closed
Move `image-classification` pipeline to new testing
# What does this PR do? - Enforce `test_small_models_{tf,pt}` methods to exist (enforce checking actual values in small tests) - Add support for non RGB image for the pipeline. - Some tests had to be modified (feature-extraction does not support a bunch of multi modal models, fill-mask can't work on Wav2vec2ForMaskedLM) Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-26-2021 08:47:19
08-26-2021 08:47:19
transformers
13,271
closed
Global transformers package imports, render local changes to the transformer src code useless for example scripts
# 🚀 Feature request The summarization.py example file imports the globally installed transformers package, not the src/transformers package, which makes local development challenging. I'd like to make changes to src/transformers/trainer.py and have those changes reflected when I run the examples/pytorch/summarization/run_summarization.py script. However, with the current repository structure the transformers folder isn't a package on its own, so setting up relative imports to reach it from the examples directory means jumping through a lot of hoops to avoid errors such as `attempted relative import with no known parent package`. The repo structure also makes it hard to move the example file into the transformers package, since none of the higher-level folders are packages themselves, which causes the same issue. ## Motivation Not having the entire repo structured as packages makes it challenging and not straightforward for someone who wants to experiment with changes to the models and training exposed through this library. The provided scripts for running various tasks and training are helpful, but since they use global imports they hardly interact with the rest of the repo and might as well be stand-alone repositories. ## Your contribution @patil-suraj has mentioned he would be a point of contact for the summarization examples directly. I would first ask if he has suggestions on how this can best be addressed (other than asking everyone facing this to restructure the repo into packages themselves); otherwise I am looking into doing that restructure, since it is blocking me from making any updates and seeing them reflected when running these training scripts. Support here would make a huge difference to the accessibility and extensibility of the provided example scripts and notebooks.
08-26-2021 01:23:24
08-26-2021 01:23:24
Hello! Maybe I misunderstood your issue, but if you clone the repository and install it as an editable package, you should be able to run the `run_summarization` script with the changes you have made. ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e . ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
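One quick way to confirm that the editable install is what the example scripts actually import (so that edits to `src/transformers/trainer.py` take effect) is to check where the package resolves from:

```python
# After `pip install -e .`, the import should resolve to the local clone,
# not to a copy in site-packages.
import transformers

print(transformers.__version__)
print(transformers.__file__)  # expected: <your clone>/src/transformers/__init__.py
```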
transformers
13,270
closed
Commit v4.9.2 release appears as v4.5.1 in "transformers-cli env"
## Environment info The command **_transformers-cli env_** returns: _2021-08-26 07:30:45.855430: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-26 07:30:45.855484: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. WARNING:tensorflow:From /home/mypath/miniconda3/lib/python3.7/site-packages/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-08-26 07:30:47.909798: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-08-26 07:30:47.920073: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-08-26 07:30:47.920123: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2021-08-26 07:30:47.920158: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sr507): /proc/driver/nvidia/version does not exist_ - `transformers` version: 4.5.1 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.7.7 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Documentation: @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. git clone --recursive https://github.com/merleyc/transformers.git 2. conda create -n hf-dev-py380 python=3.8.0 3. ipython kernel install --user --name=hf-dev-py380 4. cd transformers/ 5. git checkout v4.9.2-release 6. git log -> It appears the correct release version: _commit 41981a25cdd028007a7491d68935c8d93f9e8b47 (HEAD -> exploration, tag: v4.9.2, origin/v4.9.2-release, v4.9.2-release) Author: Lysandre <[email protected]> Date: Mon Aug 9 16:01:36 2021 +0200 Patch release: v4.9.2_ 7. git checkout -b exploration 8. pip uninstall transformers 9. pip install -e ".[dev]" 10. git clone --recursive https://github.com/huggingface/datasets 11. cd datasets/ 12. pip install -e ".[dev]" 13. export http_proxy=http://xxx:yyy; export https_proxy=http://xxx:yyy; export ftp_proxy=xxx:yyy 14. python -m pytest -n 52 --dist=loadfile -s -v ./tests/ > ~/results_cli4.txt 15. conda install cloudpickle -> because it complained: "distributed 1.26.0 requires cloudpickle>=0.2.2, which is not installed." 16. pip uninstall huggingface-hub 17. 
pip install huggingface-hub==0.0.12 -> because it complained "transformers 4.9.2 requires huggingface-hub==0.0.12, but you have huggingface-hub 0.0.15 which is incompatible." 18. pip uninstall pycodestyle 19. pip install pycodestyle==2.7.0 -> because it complained: "autopep8 1.5.6 requires pycodestyle>=2.7.0, but you have pycodestyle 2.5.0 which is incompatible." 20. pip install -e ".[dev]" -> It complained: "flake8 3.7.9 requires pycodestyle<2.6.0,>=2.5.0, but you have pycodestyle 2.7.0 which is incompatible." and showed the below result: "_9 failed, 3240 passed, 2803 skipped, 172 warnings in 2630.90s (0:43:50)_" There are several issues I observed while following these steps, like unmatched python versions, pip package dependency incompatibilities, and failing tests, but let's, for now, focus on the transformer version please. I will be happy to open other issues if needed. ## Expected behavior The result of **_transformers-cli env_** should be the version I cloned from the repo, which is version: 4.9.2
08-26-2021 00:01:53
08-26-2021 00:01:53
I don't see you activating your conda environment anywhere, can that be the source of your issue? Is there a reason you're using `pip install` instead of `conda install` when you're using a conda environment? What happens when you do `which transformers-cli`? The following seems to work flawlessly: ``` git clone --recursive https://github.com/merleyc/transformers.git cd transformers conda create -n hf-dev-py380 python=3.8.0 conda activate hf-dev-py380 git checkout v4.9.2-release pip install -e . transformers-cli env ``` ``` - `transformers` version: 4.9.2 - Platform: Linux-5.13.12-arch1-1-x86_64-with-glibc2.10 - Python version: 3.8.0 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` <|||||>Hi @LysandreJik , Yes, you're right. Thanks a lot! I was able to successfully reproduce your steps and get the correct info from _transformers-cli env_ command. However when I follow the same steps but just replacing "_pip install -e ._" by "_pip install -e ".[dev]", the _transformers-cli env_ returns the error: "_ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found_" My steps: ``` git clone --recursive https://github.com/merleyc/transformers.git cd transformers/ conda create -n hf-dev-py380 python=3.8.0 conda activate hf-dev-py380 git checkout v4.9.2-release pip install -e ".[dev]" transformers-cli env -> It throws me error #1 below. conda install -c conda-forge librosa -> as mentioned [here](https://github.com/readthedocs/readthedocs.org/issues/6086) transformers-cli env -> It throws me error #2 below. conda install libgcc -> as mentioned [here](https://github.com/BVLC/caffe/issues/4953) export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/miniconda3/lib/ transformers-cli env -> And I get my expected result! ``` So it seems the command _pip install -e ".[dev]"_ has the dependences mentioned above. What would be the implication of using _pip install -e ".[dev]"_ instead of _pip install -e ._ considering that my goal is change a Bert model (so I am a developer)? I do know that by "Providing the --dev argument will put the dependency in a special [dev-packages] location in the Pipfile. These development packages only get installed if you specify the --dev argument with pipenv install." [source](https://realpython.com/pipenv-guide/) Thanks a lot!! **Error #1:** ``` $ transformers-cli env 2021-08-27 03:34:25.566885: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-27 03:34:25.566952: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 
Traceback (most recent call last): File "/home/mypath/miniconda3/envs/hf-dev-py380/bin/transformers-cli", line 33, in <module> sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')()) File "/home/mypath/miniconda3/envs/hf-dev-py380/bin/transformers-cli", line 25, in importlib_load_entry_point return next(matches).load() File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/metadata.py", line 75, in load module = import_module(match.group('module')) File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/myotherpath/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module> from .run import RunCommand File "/myotherpath/transformers/src/transformers/commands/run.py", line 17, in <module> from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline File "/myotherpath/transformers/src/transformers/pipelines/__init__.py", line 26, in <module> from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor File "/myotherpath/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module> from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/myotherpath/transformers/src/transformers/file_utils.py", line 1985, in __getattr__ value = getattr(module, name) File "/myotherpath/transformers/src/transformers/file_utils.py", line 1984, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/myotherpath/transformers/src/transformers/file_utils.py", line 1993, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/myotherpath/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py", line 23, in <module> import torchaudio.compliance.kaldi as ta_kaldi File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/__init__.py", line 13, in <module> from torchaudio.backend import ( File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/__init__.py", line 2, in <module> from . import utils File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/utils.py", line 7, in <module> from . 
import ( File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py", line 11, in <module> import soundfile File "/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/soundfile.py", line 142, in <module> raise OSError('sndfile library not found') OSError: sndfile library not found ``` **Error #2:** ``` 2021-08-27 03:51:25.067374: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-27 03:51:25.067425: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "/home/myotherpath/miniconda3/envs/hf-dev-py380/bin/transformers-cli", line 33, in <module> sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')()) File "/home/myotherpath/miniconda3/envs/hf-dev-py380/bin/transformers-cli", line 25, in importlib_load_entry_point return next(matches).load() File "/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/metadata.py", line 75, in load module = import_module(match.group('module')) File "/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/mypath/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module> from .run import RunCommand File "/mypath/transformers/src/transformers/commands/run.py", line 17, in <module> from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline File "/mypath/transformers/src/transformers/pipelines/__init__.py", line 26, in <module> from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor File "/mypath/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module> from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor File "/mypath/transformers/src/transformers/file_utils.py", line 1985, in __getattr__ value = getattr(module, name) File "/mypath/transformers/src/transformers/file_utils.py", line 1984, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/mypath/transformers/src/transformers/file_utils.py", line 1993, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/mypath/transformers/src/transformers/models/deit/feature_extraction_deit.py", line 20, in <module> from PIL import Image File "/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/PIL/Image.py", line 114, in <module> from . 
import _imaging as core ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/PIL/../../.././libLerc.so) ``` <|||||>`pip install -e .[dev]` means you'll be installing all the dependencies that we have put down for the `dev` option; which is the option we use to imply that a user is working on the `transformers` package directly. It adds all possible dependencies that would otherwise be blocking: TensorFlow, PyTorch, Flax, speech, vision, ... It is not necessary to install this to use the `transformers` library, only to work on it. See the `setup.py` for more information: https://github.com/huggingface/transformers/blob/master/setup.py#L301-L308<|||||>Thanks for the explanation, @LysandreJik ! I was only able to successfully run `pip install -e .[dev]` after I installed extra packages (librosa and libgcc ) and set up the LD_LIBRARY_PATH. Do you also need to install these packages and set up this env variable or is it something only in my environment? If you also have this dependency, I should probably open a new issue about this it. Thanks! <|||||>Those are requirements that PIL and `soundfile` have on system-wide dependencies. Unfortunately, these are not dependencies that we can control from within the `setup.py` - but we should at least make that clearer, for example on this page: https://huggingface.co/transformers/installation.html#installation-with-pip Would you like to try your hand at modifying the docs to mention what might need to be done in case of a `[dev]` installation?<|||||>I will be happy to (will work on that next week :) )<|||||>That sounds great @merleyc, looking forward to it!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,269
closed
Add PLBart
# What does this PR do? This PR adds PLBART. - Paper: https://arxiv.org/abs/2103.06333 - Code and Checkpoints: https://github.com/wasiahmad/PLBART - Authors: @wasiahmad @kaiweichang Motivation: This encoder-decoder BART-like model allows for downstream fine-tuning on code summarization, generation, classification, etc. and is useful to the community. The fine-tuned checkpoints are also available and can be used directly. EDIT 1: ------ I am trying to make the embeddings same for the original and my implementation. Have created an issue [here](#13481) as I'm stuck with `padding_idx` outputs not matching. ### Pending Tasks - [ ] Fix FastTokenizer - [ ] Tokenizer - ~MultiTokenizer~ - [x] Fix tests - [x] Modeling - [x] Tokenizer - ~MultiTokenizer~ - [ ] Verify behavior for all checkpoints - [x] Update Docs - [ ] Update Model Cards - [x] Add remaining checkpoints - [x] `plbart-large` - [x] `plbart-base-csnet` checkpoints
08-25-2021 19:24:45
08-25-2021 19:24:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The pre-trained model conversion is working correctly. Have verified with encoder/decoder outputs and embedding outputs for encoder. The states match except for pad tokens, for which the issue is mentioned in #13481. The tokenizer for the pre-trained checkpoint contains 50005 tokens, with the special ones: ``` 0 <s> 1 <pad> 2 </s> 3 <unk> 50001 [java] 50002 [python] 50003 [en_XX] 50004 <mask> ```<|||||>@gchhablani - can I help you somehow to move forward with this PR? <|||||>Any update on this PR? @gchhablani <|||||>@patil-suraj Updates: - Rebased the branch to master. - Updated the documentation. Should I move the generation example and other model/tokenizer docs from the model/tokenization files to the `mdx`? - Removed all the extra changes due to merging issues. Regarding the tokenizes, I am not sure if they can be moved to a single tokenizer considering [this comment](https://github.com/huggingface/transformers/pull/13269#discussion_r790167690). Please let me know if there's a way to do that.<|||||>@patil-suraj @gchhablani - let me know when the PR is ready for a final review :-)<|||||>@patrickvonplaten Would be awesome if you could take a final look now :) <|||||>PR is good for merge to me! Failing test is a flaky one from the Hub.<|||||>Thanks a lot @gchhablani for all your work on this! Great job!<|||||>Hi, @gchhablani @patil-suraj @sgugger Looks like `README.md` (and therefore `doc/source/index.mdx`) is not updated for this new added model. Ideally, it is usually updated when a new model is added, right? <|||||>Thanks for flagging @ydshieh , we indeed have forgotten to ask for it during our reviews! @gchhablani would you like to add it in a follow-up PR?<|||||>@ydshieh Thanks a lot for informing! On it. @sgugger Yes, I'll quickly fix it.
transformers
13,268
closed
additional global attended token in bigbird-roberta
Hello, based on my understanding of the bigbird style attention mechanism, the only tokens in the ITC construction supported in HuggingFace that are global are the first and last tokens. Is there an easy way to add an additional global token position that should always be attended to? For example, if I want to make sure the second token in the sequence is always globally attended to, where in the modeling_bigbird.py (or elsewhere) do I need to add that, or is there another easier way to pass additional globally attended token positions? Any help would be greatly appreciated. Thank you.
08-25-2021 18:22:35
08-25-2021 18:22:35
cc @vasudevgupta7 <|||||>Hey @calderma, sorry for the late reply. I somehow missed your comment. Yes, only the ITC code is supported for now, and hence only the 1st & last **blocks** (i.e. collections of tokens) are global. So you can control which tokens are global by playing with block_size a bit, though this will increase compute somewhat since the random & sliding tokens also depend on it. In your case, since you want to make the 2nd token global, it will be a global token if the block size is > 2 (note: the default block size is 64).<|||||>Ah ok, got it. I thought it was just the first token, not the first block. My mistake. Thanks!
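As an illustration (not from the thread), the knob the reply refers to is `block_size` on the configuration; with `attention_type="block_sparse"` the first and last blocks are the global ones, so any token inside the first block (such as the second token) is attended to globally:

```python
# Illustrative configuration only; the values are examples, not recommendations.
from transformers import BigBirdConfig, BigBirdModel

config = BigBirdConfig(
    attention_type="block_sparse",  # ITC-style sparse attention
    block_size=64,                  # default: the first and last blocks of 64 tokens are global
    num_random_blocks=3,
)
model = BigBirdModel(config)  # randomly initialized, just to show the wiring
print(model.config.attention_type, model.config.block_size)
```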
transformers
13,267
closed
Better notification service
Better notification service that splits the scheduled tests into a separate channel for a less spammy experience.
08-25-2021 16:06:55
08-25-2021 16:06:55
transformers
13,266
closed
Add error message concerning revision
# What does this PR do? Adds one more item to the error message when a model, config, or tokenizer cannot be loaded. If (and only if) the user provided a revision, the error message will also tell them that they need to check on the model page whether the revision number actually exists. This does _not_ close issue https://github.com/huggingface/transformers/issues/13264 as using the short commit hash does not work yet. This PR only slightly improves user-friendliness in terms of possible user errors. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/13264#issuecomment-905614404 ## Who can review? @LysandreJik
08-25-2021 15:53:37
08-25-2021 15:53:37
transformers
13,265
closed
Add DINO conversion script
# What does this PR do? I've uploaded the Vision Transformers trained using the self-supervised method called [DINO](https://github.com/facebookresearch/dino) to the hub: https://huggingface.co/models?other=dino This PR includes the conversion script that was used. I've also added a reference to DeiT, BEiT and the DINO checkpoints to the docs of ViT, to give these models some more love.
08-25-2021 14:38:23
08-25-2021 14:38:23
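For anyone wanting to try the uploaded checkpoints, a short hedged example; the checkpoint name `facebook/dino-vitb16` is assumed to be one of the models in the linked collection:

```python
# Hedged example: load one of the self-supervised DINO ViT backbones as a plain ViTModel.
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained("facebook/dino-vitb16")
model = ViTModel.from_pretrained("facebook/dino-vitb16")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # [CLS] token + patch embeddings
```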
transformers
13,264
closed
Revisions not working as expected
We were getting a size mismatch when loading a finetuned checkpoint. After looking at the model config, I found that it had been updated and that the [embedding/vocab size had increased](https://huggingface.co/GroNLP/bert-base-dutch-cased/commit/b23d41bddd4d5c925bec648458cabd7cc578e47e). This is slightly annoying but not the core of this issue. My way of dealing with this, then, was naturally to rely on version control and simply use the previous commit which still had the config that we used for finetuning ([this one](https://huggingface.co/GroNLP/bert-base-dutch-cased/commit/61330c1ca1aa3a688f8aa015059142a1b20d3f63)). I would have expected that I can then load this model revision with the commit as given on the website: ```python from transformers import AutoModel model_name = "GroNLP/bert-base-dutch-cased" revision = "61330c1" model = AutoModel.from_pretrained(model_name, revision=revision) ``` This does not work and throws an error that the model cannot be found with the following message: ``` OSError: Can't load config for 'GroNLP/bert-base-dutch-cased'. Make sure that: - 'GroNLP/bert-base-dutch-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'GroNLP/bert-base-dutch-cased' is the correct path to a directory containing a config.json file ``` A **first improvement** would be to add to this error message something about revisions, because obviously `GroNLP/bert-base-dutch-cased` is a correct name. The deeper issue is that the model revision is simply not found when I use the commit tag on the website. By coincidence I noticed that the URL includes a much longer identifier that starts with the commit number that you can see on the website (the full commit hash). When you try that, the code does run and the revision is correctly loaded. ```python from transformers import AutoModel model_name = "GroNLP/bert-base-dutch-cased" revision = "61330c1ca1aa3a688f8aa015059142a1b20d3f63" model = AutoModel.from_pretrained(model_name, revision=revision) ``` So the bug is either - the model is not capable of looking up a revision based on the first seven characters of a hash (not sure if it should/could), - or the model hub website does not provide enough information to make this intuitive for users. One way that would help, for instance, is that the "use in transformers" button adapts itself to the current revision that a user is browsing and when clicked it includes the revision (if any) in the example usage. And/or a copy function can be added to the commit identifier that - when clicked - copies the whole hash. ### Who can help Not sure who to tag for the model page so tagging @sgugger and @LysandreJik
08-25-2021 14:19:40
08-25-2021 14:19:40
Just an update (I've already posted it on Twitter): Git tags or branches should work, such as: ```python from transformers import AutoModel model = AutoModel.from_pretrained("dbmdz/german-gpt2", revision="v1.0") ``` <|||||>Hi @BramVanroy, thanks for opening an issue! This is also tracked in https://github.com/huggingface/huggingface_hub/issues/197 cc @julien-c @Pierrci There's definitely an improvement to be done regarding the mention of the revision in the error message, feel free to give it a try if you have the time to, otherwise we'll take care of it ASAP. <|||||>@LysandreJik Great. I wasn't sure what the underlying issue was: `transformers` not correctly loading the short hash, or the web interface not displaying the full hash. Feel free to close this issue if you think that is better. I made a tiny PR for an improved error message.<|||||>> So the bug is either > > * the model is not capable of looking up a revision based on the first seven characters of a hash (not sure if it should/could), > * or the model hub website does not provide enough information to make this intuitive for users It's a partial mix of both: the model hub website does not currently have the feature to lookup a revision based on the first seven characters of a hash (loading a commit from the first few hash characters is a sugar-y feature of git, not a core feature) > A **first improvement** would be to add to this error message something about revisions, because obviously `GroNLP/bert-base-dutch-cased` is a correct name. Yes definitely, as @LysandreJik said EDIT: and your PR looks a great improvement<|||||>@julien-c I was reading through the docs on [short commit hashes](https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection#_short_sha_1) (truncated) and this seems important: > Git is smart enough to figure out what commit you’re referring to if you provide the first few characters of the SHA-1 hash, as long as that partial hash is at least four characters long **and unambiguous; that is, no other object in the object database can have a hash that begins with the same prefix.** I don't know how (and if you) you would implement loading revisions from short hashes, but checking for ambiguity seems an important point to consider - even though the chances are quite small that the first 4-7 characters are identical between two commits within the same repo.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Bumping to keep this open.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Not sure if I should try to keep this open. 
Final bump unless others interact.<|||||>It's really on the Hub side of things, so https://github.com/huggingface/huggingface_hub/issues/197 should be tracking it and it can be closed on this side (unless I'm missing something).<|||||>Hey @BramVanroy (and @sgugger) we solved this through better UX, actually: if you take a look at the commit history on https://huggingface.co/bert-base-uncased/commits/main you now have buttons to copy the full commit hash (exactly like on GitHub), thanks to @beurkinger on the Hub team. See the screenshot below: <img width="1131" alt="Screenshot 2021-10-21 at 18 51 05" src="https://user-images.githubusercontent.com/326577/138322377-5d0a7195-522b-4f50-af35-a663e10390d7.png"> Hope this helps!
transformers
13,263
closed
Replace assert statement with if condition and raise ValueError
# What does this PR do? The goal of this PR is to replace assert statements with if statements and raise appropriate exceptions (see issue #12789) Replaces ``` assert lr_init > lr_end, f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})" ``` with ``` if not (lr_init > lr_end): raise ValueError(f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})") ``` in optimization.py Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
08-25-2021 13:29:32
08-25-2021 13:29:32
transformers
13,262
closed
Printing weights of a pre-trained model
During the generation of the key matrix, query matrix, and value matrix, what are the weights used? How do I print these weights?
08-25-2021 11:18:43
08-25-2021 11:18:43
In PyTorch, you can easily print out the weights of any model, like so (let's take BERT as an example): ``` from transformers import BertModel model = BertModel.from_pretrained("bert-base-uncased") for name, param in model.named_parameters(): print(name, param.shape) ``` This prints a long list of all parameter names, together with their shape. The keys, values and queries are parameters of each layer of BERT (BERT-base has 12 layers, so there are 12 key, value and query matrices). One of them is the following: ``` encoder.layer.0.attention.self.query.weight torch.Size([768, 768]) encoder.layer.0.attention.self.query.bias torch.Size([768]) encoder.layer.0.attention.self.key.weight torch.Size([768, 768]) encoder.layer.0.attention.self.key.bias torch.Size([768]) encoder.layer.0.attention.self.value.weight torch.Size([768, 768]) encoder.layer.0.attention.self.value.bias torch.Size([768]) ```<|||||>Yes, we can print the shape and parameter names by using this code. However, I wish to print the whole matrix [768,768] /[768]: (the value of the matrix). There must be some value assigned to this after the training is done.<|||||>Just replace `print(name, param.shape)` by `print(name, param)` in the code above.<|||||>Yea, that works. Thanks a lot
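If you only need the query/key/value matrices of a specific layer rather than the full list, you can also index into the module tree directly. A minimal sketch, assuming the same `bert-base-uncased` checkpoint as above:

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Self-attention module of the first encoder layer
attention = model.encoder.layer[0].attention.self

print(attention.query.weight.shape)  # torch.Size([768, 768])
print(attention.key.weight.shape)    # torch.Size([768, 768])
print(attention.value.weight.shape)  # torch.Size([768, 768])

# Full matrix values, detached from the autograd graph
print(attention.query.weight.detach().numpy())
```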
transformers
13,261
closed
Fix failing Hubert test
null
08-25-2021 10:40:31
08-25-2021 10:40:31
transformers
13,260
closed
Add require flax to MT5 Flax test
Adds a forgotten `require_flax`
08-25-2021 10:04:31
08-25-2021 10:04:31
🚀🚀🚀 this
transformers
13,259
closed
Some `model_type`s cannot be in the mapping
Some `model_type`s cannot be in the mapping. This PR offers a fallback for these cases. The following had stopped working (tested by `test_bert2bert_summarization`): ``` tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") ```
08-25-2021 09:59:38
08-25-2021 09:59:38
transformers
13,258
closed
Add CLIP tokenizer to AutoTokenizer
CLIP was not added to the `AutoTokenizer` mapping
08-25-2021 09:40:26
08-25-2021 09:40:26
transformers
13,257
closed
Remove side effects of disabling gradient computation
Disabling gradient computation here affects all subsequent torch operations, as it removes gradient computation for them as well. This made a few `Trainer` tests fail in the slow tests.
08-25-2021 09:32:45
08-25-2021 09:32:45
transformers
13,256
closed
ingore_mismatched_sizes Wav2Vec2 unknown argument
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Win 10 - Python version: 3.8 - PyTorch version (GPU?): yes - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) added size missmatch ignore to loading model The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` model = Wav2Vec2ForCTC.from_pretrained( File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\modeling_utils.py", line 1321, in from_pretrained model = cls(config, *model_args, **model_kwargs) TypeError: __init__() got an unexpected keyword argument 'ingore_mismatched_sizes' ``` ``` model = Wav2Vec2ForCTC.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir, activation_dropout=model_args.activation_dropout, attention_dropout=model_args.attention_dropout, hidden_dropout=model_args.hidden_dropout, feat_proj_dropout=model_args.feat_proj_dropout, mask_time_prob=model_args.mask_time_prob, gradient_checkpointing=model_args.gradient_checkpointing, layerdrop=model_args.layerdrop, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer), ingore_mismatched_sizes=True, ) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model can be loaded like Vit,Deit or Beit do with the ingore_mismatched_sizes argument
08-25-2021 09:07:52
08-25-2021 09:07:52
Hello, ~kindly check out this [reply](https://github.com/huggingface/transformers/issues/13187#issuecomment-902116183) and see if it can solve your problem.~ There is a misspelling with your argument. `ignore_mismatched_sizes` not `ingore_mismatched_sizes`<|||||>Oh, thank you. I didn't see that, I just copied it from another issue here
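For future readers, a minimal sketch of the corrected call (the checkpoint name and vocab size are placeholders; the point is only the spelling of the keyword):

```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",      # placeholder; use your own checkpoint
    vocab_size=40,                 # placeholder value that differs from the checkpoint
    ignore_mismatched_sizes=True,  # note the spelling: "ignore", not "ingore"
)
```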
transformers
13,255
closed
Label Smoothing for Question Answering task
# 🚀 Feature request Currently, label smoothing is only applied when "labels" is present in the inputs of the `compute_loss` function, which by default is not the case for question answering in the `Trainer` class. ## Your contribution I would like to work on this issue and submit a PR that modifies `compute_loss` to use `label_names`, so that label smoothing can also be applied for question answering.
08-25-2021 08:58:08
08-25-2021 08:58:08
cc @sgugger <|||||>I'm not entirely sure how straightforward this could be, plus it seems like a very narrow use case. I think this should be implemented independently by the user with a custom `compute_loss` function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
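For anyone who wants to implement this on their own, a rough sketch of a custom `compute_loss` that applies label smoothing to the start/end positions of a QA model (a sketch only: the smoothing value is arbitrary, and the `label_smoothing` argument of `CrossEntropyLoss` requires PyTorch >= 1.10):

```python
import torch
from transformers import Trainer

class QATrainerWithLabelSmoothing(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        start_positions = inputs.pop("start_positions")
        end_positions = inputs.pop("end_positions")
        outputs = model(**inputs)
        loss_fct = torch.nn.CrossEntropyLoss(label_smoothing=0.1)
        # Average the smoothed losses over the start and end logits
        loss = (
            loss_fct(outputs.start_logits, start_positions)
            + loss_fct(outputs.end_logits, end_positions)
        ) / 2
        return (loss, outputs) if return_outputs else loss
```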
transformers
13,254
closed
Is the sentiment analysis model only for English?
Is the model wrapped by classifier = pipeline('sentiment-analysis') only for English? When I use Chinese, the results are completely wrong, for example: words = '这是一个不错的服务' judge1 = classifier(words) words = '这是一个很好的服务' judge2 = classifier(words) print(judge1,judge2) The output is: [{'label': 'NEGATIVE', 'score': 0.9500061273574829}] [{'label': 'NEGATIVE', 'score': 0.92015540599823}]
08-25-2021 08:22:14
08-25-2021 08:22:14
This is a bit intimidating to me haha. I'll use deep learning to translate your issue (😮 ): DeepL says: classifier = pipeline('sentiment-analysis') Is the model encapsulated in this only for English? I found it completely wrong using Chinese, e.g. words = 'this is a good service' judge1 = classifier(words) words = 'This is a good service' judge2 = classifier(words) print(judge1,judge2) => answer: yes, the default sentiment analysis pipeline is English-only, as it uses a `DistilBertForSequenceClassification` model fine-tuned on English data (I'm not sure, Ii's a bummer that it's difficult to know what the default model is that is used for each pipeline, see #12845). You can indeed, as Patrick mentions below, use a custom model from the hub.<|||||>You could use the model hub to find sentiment analysis models in Chinese as follows: https://huggingface.co/models?language=zh&pipeline_tag=text-classification&sort=downloads<|||||>And then do: ```python classifier = pipeline('sentiment-analysis', model="uer/roberta-base-finetuned-chinanews-chinese") ```<|||||>> And then do: > > ```python > classifier = pipeline('sentiment-analysis', model="uer/roberta-base-finetuned-chinanews-chinese") > ``` thank you for your answer,that's help a lot<|||||>> This is a bit intimidating to me haha. I'll use deep learning to translate your issue (😮 ): > > DeepL says: > > classifier = pipeline('sentiment-analysis') > Is the model encapsulated in this only for English? I found it completely wrong using Chinese, e.g. > words = 'this is a good service' > judge1 = classifier(words) > words = 'This is a good service' > judge2 = classifier(words) > print(judge1,judge2) > > => answer: yes, the default sentiment analysis pipeline is English-only, as it uses a `DistilBertForSequenceClassification` model fine-tuned on English data (I'm not sure, Ii's a bummer that it's difficult to know what the default model is that is used for each pipeline, see #12845). You can indeed, as Patrick mentions below, use a custom model from the hub. sorry i use chinese, and thank you for your answer,that's help a lot
transformers
13,253
closed
Cannot use RemBert
![Screenshot (1508)](https://user-images.githubusercontent.com/65230225/130734536-0d498812-2ee4-45fe-96a0-6c33fc007916.png) When I am trying to use the AutoTokenizer for the newly added RemBert model it's giving me this error ![Screenshot (1509)](https://user-images.githubusercontent.com/65230225/130736371-869e52d9-d4cb-4dc1-a480-8234d658420c.png) When I try importing RemBertTokenizer then it's giving me this error.
08-25-2021 05:59:32
08-25-2021 05:59:32
Make sure to install Transformers from master: `pip install git+https://github.com/huggingface/transformers.git`<|||||>Thanks @NielsRogge It's working fine now
transformers
13,252
closed
Add `--max_length` argument in seq2seq trainer.
# 🚀 Feature request Currently the [seq2seq Trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py) uses `--max_length` for the prediction step. However, there is no `--max_length` argument in the trainer, see [here](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainingargument#transformers.Seq2SeqTrainingArguments) and [here](https://huggingface.co/transformers/_modules/transformers/training_args_seq2seq.html#Seq2SeqTrainingArguments). During training (with `--predict_with_generate`), when the evaluate function is called, it performs the [prediction step](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L128) with `model.config.max_length` through this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L166). Unless you call `trainer.evaluate(eval_dataset = eval_dataset, max_length=max_target_length)` manually, at training time it uses `model.config.max_length`. Also, without reviewing the source code, it is difficult to grasp this. So at training time, for `prediction_loop`, the model performs evaluation based on [this](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L166). It uses `self.model.config.max_length` for doing prediction, which is kind of confusing, I would say. Let's look into this: ``` >>> import transformers >>> transformers.__version__ '4.10.0.dev0' >>> model = transformers.AutoModel.from_pretrained("google/mt5-large") Some weights of the model checkpoint at google/mt5-large were not used when initializing MT5Model: ['lm_head.weight'] - This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). >>> model.config.max_length 20 ``` A user who is not careful about this argument would totally miss this. Personally, I spent quite a bit of time on this: my `compute_metrics()` values on the dev set during training were not good, but the score on the test dataset at the end of training (using my own `trainer.evaluate()` call) was high. ## Motivation Adding `--max_length` to [Seq2SeqTrainer](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py#L27) would help the user be aware of this parameter. @sgugger
08-25-2021 05:40:14
08-25-2021 05:40:14
This is added by the PR mentioned above.<|||||>Thanks a lot for the new feature. Closing the issue.
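For reference, with the linked PR the generation length can be configured once in the training arguments instead of being passed to every `evaluate`/`predict` call. A small sketch (values are placeholders; check the argument names against your installed version):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="output",
    predict_with_generate=True,
    generation_max_length=128,  # used during evaluation instead of model.config.max_length
    generation_num_beams=4,
)
```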
transformers
13,251
closed
fix `tokenizer_class_from_name` for models with `-` in the name
https://github.com/huggingface/transformers/pull/13023 breaks for some models with `-` in its name. e.g. `xlm-roberta`, For example: ``` Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_mlm.py", line 550, in <module> main() File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_mlm.py", line 337, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 432, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 226, in tokenizer_class_from_name module = importlib.import_module(f".{module_name}", "transformers.models") File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'transformers.models.xlm-roberta' ``` as you can see it tries to import "`transformers.models.xlm-roberta`", to reproduce: ``` RUN_SLOW=1 pytest tests/deepspeed -k clm_xlm_roberta ``` ``` # module_name, tokenizers debug print: xlm-roberta ('XLMRobertaTokenizer', 'XLMRobertaTokenizerFast') ``` This PR fixes it: ``` module = importlib.import_module(f".{module_name.replace('-', '_')}", "transformers.models") ``` Oddly enough I don't get this problem if I run `xlm-roberta-base`, so this is an edge case. As the core models seems to not trigger the problem. Not sure why. In the deepspeed test suite the 2 failing tests were: ``` RUN_SLOW=1 pytest tests/deepspeed -k clm_xlm_roberta ``` Now it has a core test - thanks @LysandreJik @LysandreJik also pushed a fix for model names that mismatch model files which is the case with `openai-gpt` @LysandreJik, @sgugger
08-25-2021 04:15:47
08-25-2021 04:15:47
Should we turn your code snippet into a test? It's almost complete - just needs an assert.<|||||>Yes, that would be great!<|||||>OK, done. It's not a perfect test as it'll fail on the first invalid entry rather than test them all, but it's probably good enough in this situation. Thank you for doing the heavy lifting for adding this test, @LysandreJik <|||||>Hi there! Thanks a lot for fixing this while I was away. There is a `model_type_to_module_name` function defined in `configuration_auto` that already does all what you added. Will make a PR to switch to that. The tests should make sure it doesn't break anything.
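For context, the added test boils down to a loop of this shape (an illustrative sketch, not the exact test that was committed, and it assumes the mapping is exposed as `TOKENIZER_MAPPING_NAMES`):

```python
from transformers.models.auto.tokenization_auto import (
    TOKENIZER_MAPPING_NAMES,
    tokenizer_class_from_name,
)

def test_tokenizer_class_from_name():
    for tokenizer_names in TOKENIZER_MAPPING_NAMES.values():
        for tokenizer_name in tokenizer_names:
            if tokenizer_name is None:
                continue
            # Must not raise ModuleNotFoundError for "xlm-roberta"-style model types
            assert tokenizer_class_from_name(tokenizer_name) is not None
```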
transformers
13,250
closed
Check None before going through iteration
# What does this PR do? This PR fixes the error mentioned in #13234. This is a quick solution. For long-term development, should we change the default value of `_keys_to_ignore_on_xxx` from `None` to `list`, so we can dismiss the check of whether it is `None` before running any iteration. https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modeling_utils.py#L444-L450 https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modeling_tf_utils.py#L626-L629 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-25-2021 04:14:33
08-25-2021 04:14:33
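The gist of the fix is to treat `None` as an empty list before iterating. An illustrative sketch of the pattern (hypothetical helper, not the actual patch):

```python
def missing_keys_are_all_ignorable(load_result, model):
    # _keys_to_ignore_on_save defaults to None for many models; fall back to []
    keys_to_ignore = model._keys_to_ignore_on_save or []
    return set(load_result.missing_keys) == set(keys_to_ignore)
```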
transformers
13,249
closed
how to finetune mT5 on XGLUE-NTG task
# 📚 Migration ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): google/mt5-base Language I am using the model on (English, Chinese ...): multi-language The problem arises when using: * [ ] the official example scripts: (give details below) * my own modified scripts: (give details below) Just a little change in ./examples/pytorch/summarization/run_summarization_no_trainer.py to suit for NTG task and bleu evaluation metric. The tasks I am working on is: * an official GLUE/SQUaD task: (give the name): XGLUE-NTG * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> When training MT5 with multilingual data, do I need to add the "--source_prefix" argument like T5? If so, " --source_prefix=' Summarize: ' " Is that right? But when this was added, the results were poor in all language but English. Is there a problem with my parameter setting? ![image](https://user-images.githubusercontent.com/30341159/130722965-4911b07b-aee5-4651-8516-be3b8d4a8d0a.png) Also, the result with the parameter "--source_prefix" above is actually the same as the result without the parameter below: ![image](https://user-images.githubusercontent.com/30341159/130723118-a50ae07e-c541-48dc-ab34-839562b7d309.png) Should we set different --source_prefix for different languages, and how to set that? ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: 3.6 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - I checked if a related official extension example runs on my machine.
08-25-2021 03:48:57
08-25-2021 03:48:57
From the T5 author (I asked him): > since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. Hence, no prefix should be used. However, the performance you get without a prefix is similar, you say?<|||||>> From the T5 author (I asked him): > > > since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. > > Hence, no prefix should be used. However, the performance you get without a prefix is similar, you say? Thank you very much for your reply. Does mT5 have any finetuning scripts for the multilingual title generation task? Why is it so bad in other languages? Does mT5 have any special hyperparameters that need to be set? Here is my command: python -u -m torch.distributed.launch --nproc_per_node 4 --use_env examples/pytorch/summarization/run_xglue_no_trainer.py --model_name_or_path=google/mt5-base --dataset_name=ntg --per_device_train_batch_size=2 --per_device_eval_batch_size=4 Thanks! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@koukoulala @NielsRogge I also had a similar doubt: instead of mT5, I want to finetune M2M100 on more than one language pair. Any leads on how to achieve that? I am able to finetune on a single language pair, but how do I finetune on more than one pair simultaneously?
transformers
13,248
closed
[doc] correct TP implementation resources
This PR fixes a few implementation links: it removes an incorrect one and adds a new one. @LysandreJik
08-25-2021 03:10:32
08-25-2021 03:10:32
transformers
13,247
closed
Why do we need to use `Loss.repeat(eval_batch_size)` in accelerator gather loop?
https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/examples/pytorch/language-modeling/run_mlm_no_trainer.py#L510 If I do not use this and simply do `accelerator.gather(loss)`, my code gets stuck at this point. But if I repeat the loss, it seems to work. Can you explain why this is the case? Why do we also later use `losses = losses[: len(eval_dataset)]`?
08-25-2021 00:25:09
08-25-2021 00:25:09
@sgugger Can you please help?<|||||>Hi @thakursc1, Sylvain is currently off until next week - he'll answer your query when he's back from his break. Thanks for your understanding.<|||||>I'll look at why it does not work for a 0d tensor when I have a bit of bandwidth (lots to do when coming back from vacation!) The main reason we do it this way is to be able to compute the true average of the loss across all the processes at the end (otherwise we won't get the exact value of the loss). That being said, there is no reason why `accelerator.gather(loss)` in your code should be stuck. <|||||>https://github.com/huggingface/accelerate/pull/152 should fix the problem on the Accelerate side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
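To make the answer above concrete, this is the shape of the evaluation loop in `run_mlm_no_trainer.py` (a condensed sketch; `model`, `accelerator`, `eval_dataloader`, `eval_dataset` and `args` are assumed to exist as in the script):

```python
import math
import torch

losses = []
for batch in eval_dataloader:
    with torch.no_grad():
        outputs = model(**batch)
    loss = outputs.loss
    # repeat() turns the 0-d loss into a 1-d tensor with one entry per example,
    # so gather() has a batch dimension to concatenate across processes
    losses.append(accelerator.gather(loss.repeat(args.per_device_eval_batch_size)))

losses = torch.cat(losses)
# Distributed samplers pad/duplicate examples so every process sees full batches;
# truncating to the real dataset size drops those extras before averaging
losses = losses[: len(eval_dataset)]
perplexity = math.exp(torch.mean(losses))
```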
transformers
13,246
open
[model loading] framework-agnostic dtype parameter
This is split off from one of the discussions at https://github.com/huggingface/transformers/pull/13209: 1. It all started with trying to load torch models under either the desired dtype or the dtype of the pretrained model, and thus avoid 2x memory usage, e.g. if the model needs to be just fp16. So we added `torch_dtype` to `from_pretrained` and `from_config`. 2. Then we started storing `torch_dtype` in the config file for possible future automatic loading of the model in the optimal "regime". 3. This resulted in a discrepancy where the same symbol sometimes means `torch.dtype` and at other times a string like "float32", as we can't store `torch.dtype` in json. 4. Then in https://github.com/huggingface/transformers/pull/13209#discussion_r693292542 we started discussing how `dtype` is really the same across pt/tf/flax and perhaps we should just use `dtype` in the config and variables, have it consistently be a string ("float32"), and convert it to the right dtype object of the desired framework at the point of use, e.g. `getattr(torch, "float32")` A possible solution is to deprecate `torch_dtype` and replace it with a `dtype` string both in the config and in the function argument. Possible conflicts with the naming: 1. we already have the `dtype` attribute in modeling_utils, which returns `torch.dtype` based on the first param's dtype. https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L205 The context is different, but still this is something to consider to avoid ambiguity. I may have missed some other areas. So please share if something else needs to be added. Additional notes: - wrt flax: https://github.com/huggingface/transformers/pull/13209#discussion_r694511759 > #13098 - the idea of the PR is exactly to disentangle parameter dtype from matmul/computation dtype. In Flax, it's common practice that the dtype parameter defines the matmul/computation dtype, see: https://flax.readthedocs.io/en/latest/_autosummary/flax.linen.Dense.html#flax.linen.Dense.dtype, and not the parameter dtype. > So for Flax, I don't really think it would make sense to use a config.dtype to define the weights dtype, as it would be quite confusing with Flax's computation dtype parameter. @LysandreJik, @sgugger, @patrickvonplaten
08-24-2021 22:37:45
08-24-2021 22:37:45
Would like to ping @Rocketknight1 regarding the TensorFlow management of types, and @patil-suraj for flax<|||||>This should work in Tensorflow too - you can use `tf.dtypes.as_dtype(dtype_string)` to turn strings into TF dtype objects.<|||||>@Rocketknight1 Sorry, but can you please elaborate on how to load the model in Tensorflow or point me in the right direction? I am new to hugging face and I have been looking all over for instructions on how to do it. Thank you.
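To illustrate the framework-agnostic idea, converting a stored string such as "float32" into each framework's dtype object could look roughly like this (a sketch of the conversion only, not a proposed API):

```python
import torch
import tensorflow as tf
import jax.numpy as jnp

dtype_str = "float32"  # what would be stored in the config

pt_dtype = getattr(torch, dtype_str)      # torch.float32
tf_dtype = tf.dtypes.as_dtype(dtype_str)  # tf.float32
flax_dtype = jnp.dtype(dtype_str)         # dtype('float32')
```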
transformers
13,245
closed
Add ability to include additional model card info within Trainer
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Allow users to optionally provide model description, intended use, ethical considerations, caveats and recommendations, etc. when calling `trainer.push_to_hub` and/or `trainer.create_model_card`. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Right now, when you do `trainer.push_to_hub`, it runs `trainer.create_model_card`, which calls `transformers.modelcard.TrainingSummary.to_model_card` behind the scenes to put together the model card's text. It does not allow you to pass the sections mentioned above. https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/trainer.py#L2479-L2502 Thus, whenever someone pushes to the hub with `Trainer`, there are a bunch of sections that say "More information needed". --- I see that `transformers.modelcard.ModelCard` has these options, but there's a note about it being deprecated. https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modelcard.py#L74-L100 ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
08-24-2021 22:19:55
08-24-2021 22:19:55
Yes the `ModelCard` is deprecated as it does not make any sense to deal with this programmatically (the class was there for more than a year and I haven't seen it used once). The Trainer drafts the model card, it's then the responsibility of the user to edit it in a followup commit, but the easiest way to do this, is through the user's preferred text editor IMO.<|||||> > but the easiest way to do this, is through the user's preferred text editor IMO or directly on the hub website 👍 we should remove the current `ModelCard` implementation to make room for new helpers IMO (which will be in `huggingface_hub`) <|||||>It will disappear in v5 as it's a breaking change.<|||||>closing and will address in `huggingface_hub`
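For completeness, `Trainer.create_model_card` already accepts a handful of metadata keyword arguments that fill in the card's YAML header (the free-form sections discussed above are still meant to be edited by hand). A rough sketch, with placeholder values and keyword names that may vary between versions:

```python
# Assuming `trainer` is an already-trained transformers.Trainer instance
trainer.create_model_card(
    language="en",
    license="apache-2.0",
    tags=["text-classification"],
    model_name="my-finetuned-model",
    finetuned_from="bert-base-uncased",
    tasks="text-classification",
)
```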
transformers
13,244
open
Tapas tokenization Different from Tensorflow Code
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.1 ### Who can help @LysandreJik @sgugger @NielsRogge ## Information Model I am using (Bert, XLNet ...): Tapas When I am trying to replicate the TAPAS table retrieval results using Huggingface Tapas implementation, I find that [Tapas tokenization in Huggingface](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L1314) is different from the original [Tensorflow code ](https://github.com/google-research/tapas/blob/master/tapas/utils/tf_example_utils.py#L391). The original code first checks whether the table cell is "n/a", "?" or empty. If so, it would return "[EMPTY]" token. The Huggingface code has implemented [the same tokenization](https://github.com/huggingface/transformers/blob/master/src/transformers/models/tapas/tokenization_tapas.py#L370) with the tensorflow code, but it is not used to tokenize the tables. It could be easily fixed by changing all the calls of function `self.tokenize` to `self._tokenize` in the `_tokenize_table` function. After fixing this, I could use the released table retrieval model to replicate their results on NQ dataset with Huggingface Tapas.
08-24-2021 20:19:40
08-24-2021 20:19:40
Hi, Thanks for your interest in TAPAS. However, I do think the `_tokenize()` method is effectively used by TapasTokenizer. This is because `TapasTokenizer` itself inherits from `PreTrainedTokenizer`, which defines the `tokenize()` method [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L249). This method will in turn call `_tokenize()` as can be seen [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L339). You can also verify this using a simple example: ``` import pandas as pd from transformers import TapasTokenizer tokenizer = TapasTokenizer.from_pretrained("google/tapas-base") data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "n/a"], 'Number of movies': ["?", "53", "69"]} queries = ["What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?"] table = pd.DataFrame.from_dict(data) inputs = tokenizer(table=table, queries=queries) print(tokenizer.decode(inputs.input_ids[0])) ``` As you can see, I've replaced two cell values by n/a and ?, i.e. there are some empty cells in the table. This returns: `[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt [EMPTY] leondardi di caprio 53 [EMPTY] 69` The empty cells are correctly replaced by the [EMPTY] token.<|||||>> Hi, > > Thanks for your interest in TAPAS. However, I do think the `_tokenize()` method is effectively used by TapasTokenizer. This is because `TapasTokenizer` itself inherits from `PreTrainedTokenizer`, which defines the `tokenize()` method [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L249). This method will in turn call `_tokenize()` as can be seen [here](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L339). > > You can also verify this using a simple example: > > ``` > import pandas as pd > from transformers import TapasTokenizer > > tokenizer = TapasTokenizer.from_pretrained("google/tapas-base") > > data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "n/a"], 'Number of movies': ["?", "53", "69"]} > queries = ["What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?"] > table = pd.DataFrame.from_dict(data) > inputs = tokenizer(table=table, queries=queries) > print(tokenizer.decode(inputs.input_ids[0])) > ``` > > As you can see, I've replaced two cell values by n/a and ?, i.e. there are some empty cells in the table. This returns: > > `[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt [EMPTY] leondardi di caprio 53 [EMPTY] 69` > > The empty cells are correctly replaced by the [EMPTY] token. Thank you very much for your reply! It seems that "n/a" and "?" are tokenized into [EMPTY] token, but if the cell is an empty string, then it is ignored by the tokenizer. For this example, `data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "n/a"], 'Number of movies': ["", "53", "69"]}` the tokenization result is `[CLS] what is the name of the first actor? [SEP] actors number of movies brad pitt leonardo di caprio 53 [EMPTY] 69`. If I directly call `self._tokenize`, it is tokenized into `[CLS] what is the name of the first actor? 
[SEP] actors number of movies brad pitt [EMPTY] leonardo di caprio 53 [EMPTY] 69` I guess it is because the [tokenize function](https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/tokenization_utils.py#L336) returns empty list when the token is an empty string before it is passed to the `_tokenize` function, which is different from that of the Tapas tensorflow implementation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>That's interesting @Doreenruirui, are you interested in making a PR to fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>Hi @Doreenruirui, > After fixing this, I could use the released table retrieval model to replicate their results on NQ dataset with Huggingface Tapas. This is very interesting, thanks for letting me know. Are you interested in opening a PR that includes the fix? We could perhaps also add the table retrieval models to the hub. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @NielsRogge > This is very interesting, thanks for letting me know. Are you interested in opening a PR that includes the fix? I would like to work on this, i can start if nobody else is working on this. Thanks<|||||>@NielsRogge @Doreenruirui This issue seems to fixed. We can close this issue.
transformers
13,243
closed
Validate onnx model
Test the ONNX model export with different input shapes when exporting with dynamic axes.
08-24-2021 20:07:55
08-24-2021 20:07:55
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
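A rough sketch of what such a validation could look like, running an exported model through ONNX Runtime with two different sequence lengths (the file path and input names are assumptions based on a default BERT-style export):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # placeholder path

for seq_len in (8, 32):  # different lengths exercise the dynamic axes
    inputs = {
        "input_ids": np.random.randint(0, 1000, size=(1, seq_len), dtype=np.int64),
        "attention_mask": np.ones((1, seq_len), dtype=np.int64),
    }
    outputs = session.run(None, inputs)
    print(seq_len, [o.shape for o in outputs])
```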
transformers
13,242
closed
[Tentative] Adds support for exporting TransformerXL-based models to ONNX
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hello everyone! I hope everything is going well with you. This PR adds a naïve support when trying to export a TransformerXL model with ONNX. Nevertheless, this is a tentative PR as PyTorch still does not support the newest ONNX opset versions (13+), which includes `triu` and `tril` operators (required by TransformerXL). As soon as PyTorch updates its API, any reflected change will also be updated in this PR. Meanwhile, this tentative PR serves as a guide to anyone that needs to export their model. **Note:** there is a trick to leverage `triu` and `tril` operators by hand-implementing them with a compatible format to ONNX, instead of using the ones PyTorch's provides. Best regards, Gustavo. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-24-2021 19:29:02
08-24-2021 19:29:02
> It seems I'm facing the issue you mention about `tril` when trying to export a Transfo-XL model using your PR: > > ``` > RuntimeError: Exporting the operator triu to ONNX opset version 12 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub. > ``` > > Is this due to a mismatch in `onnxruntime` version? cc @mfuntowicz Exactly! The main problem is that onnx/onnxruntime added the `triu/tril` operator on opset version 13 (if I am not mistaken), but PyTorch has not released a version which supports such opset yet (current PyTorch supports up to opset version 12). In my humble opinion, I guess this PR should be staled until PyTorch releases their new version, hopefully by the end of the month. With that in mind, we do not need to leverage the `triu/tril` operator or even implement a hacked-version that allows to be exported to ONNX. What do you think?<|||||>I see - thank you for the explanation. I'll add the `WIP` label to the PR so that the stalebot does not close it, and let's revisit once pytorch 1.10 comes out.<|||||>Torch nightly now has opset 14 with support for both triu/tril, however my testing had these issues: **1) both triu and tril want a constant for the diagonal parameter** A work-around hack is to: use model config: same_length=False and this code change in modeling_transfo_xl.py (line 908 and 945): > if mems is None and False: > mems = self.init_mems(bsz) > ... > if mems is not None: > print("WARNING: if using mems no onnx export") > dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1 + mlen)[:, :, None] > else: > dec_attn_mask = torch.triu(word_emb.new_ones((qlen, klen), dtype=torch.uint8), diagonal=1)[:, :, None] **2) The export command python -m transformers.onnx --model models --feature causal-lm --opset 14 models_onnx still gives an error:** > File "/home/craig/envcond/transformers/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 378, in _create_inference_session > sess.initialize_session(providers, provider_options, disabled_optimizers) > onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Trilu(14) node with name 'Trilu_64'
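Regarding the hand-implemented `triu`/`tril` trick mentioned in the PR description, an ONNX-friendly `triu` can be emulated with comparisons and a multiplication, roughly like this (a sketch; the `diagonal` value must still be a plain Python int known at export time):

```python
import torch

def triu_onnx(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    # Keep entries where column_index - row_index >= diagonal, zero out the rest
    rows = torch.arange(x.size(-2), device=x.device).unsqueeze(-1)
    cols = torch.arange(x.size(-1), device=x.device).unsqueeze(0)
    mask = (cols - rows) >= diagonal
    return x * mask.to(x.dtype)
```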
transformers
13,241
closed
Can we use trainer api with custom model layer?
I am trying to use the Hugging Face Trainer API. For example, my model looks like this: ``` class BERTClass(torch.nn.Module): def __init__(self): super(BERTClass, self).__init__() self.l1 = AutoModel.from_pretrained("distilbert-base-uncased",return_dict=False) self.l2 = torch.nn.Dropout(0.2) self.l3 = torch.nn.Linear(768, 50) def forward(self, ids, mask, token_type_ids): _, output_1= self.l1(ids, attention_mask = mask, token_type_ids = token_type_ids) output_2 = self.l2(output_1) output = self.l3(output_2) return output model = BERTClass() ``` Is it possible to use the Trainer API for custom models? Such as: trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=val_dataset # evaluation dataset ) trainer.train()
08-24-2021 17:16:04
08-24-2021 17:16:04
Yes you can. As mentioned in the [docs](https://huggingface.co/transformers/main_classes/trainer.html): > When using it on your own model, make sure: > your model always return tuples or subclasses of ModelOutput. > your model can compute the loss if a labels argument is provided and that loss is returned as the first element of the tuple (if your model returns tuples) > your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named "label".
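To make that concrete, the custom module from the question could be adapted along these lines so it returns a `(loss, logits)` tuple when labels are passed (a sketch only; the `binary_cross_entropy_with_logits` loss here assumes multi-label targets, adjust to your task):

```python
import torch
from transformers import AutoModel

class BERTClass(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = AutoModel.from_pretrained("distilbert-base-uncased")
        self.l2 = torch.nn.Dropout(0.2)
        self.l3 = torch.nn.Linear(768, 50)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.l1(input_ids, attention_mask=attention_mask).last_hidden_state
        logits = self.l3(self.l2(hidden[:, 0]))  # use the first token's representation
        if labels is not None:
            loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels.float())
            return (loss, logits)  # Trainer reads the loss from the first element
        return (logits,)
```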
transformers
13,240
closed
Improve T5 docs
# What does this PR do? This PR aims to clarify and explain some of the magic that's happening when using T5 for training/inference. It includes: - explaining that the model automatically creates the `decoder_input_ids` based on the `labels` (a lot of people were still confused by this, see e.g. #11977, #13213 - added code examples, to show a basic forward pass that includes the fact that padding token ids of the labels should be replaced by -100 (at least, for PyTorch, I see that for FLAX one uses the `decoder_attention_mask` to skip padding tokens), and code examples for inference (both batched/not batched) - adding info about T5's variants, including T5v1.1, mT5 and byT5, with links to their docs. - additional tips & tricks, based on what I found on the forum (learning rate, training on TPU, etc). In addition, I've added T5v1.1 to the main README as well making it have its own documentation page, and some more info to the mT5 docs.
08-24-2021 13:18:11
08-24-2021 13:18:11
transformers
13,239
closed
BERT finetuning “index out of range in self”
I am trying to build a multiclass classifier with a pretrained BERT model. I am completely new to the topic. I have 8 classes and use Huggingface’s Dataset infrastructure to finetune a pretrained model for the German language: ``` from transformers import AutoModelForSequenceClassification from transformers import Trainer, TrainingArguments from sklearn.metrics import accuracy_score, f1_score num_labels_cla = 8 model_name_cla = "bert-base-german-dbmdz-uncased" batch_size_cla = 8 model = AutoModelForSequenceClassification.from_pretrained(model_name_cla, num_labels=num_labels_cla) def tokenize(batch): return tokenizer(batch['text'], padding=True, truncation=True,max_length=260) def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) f1 = f1_score(labels, preds, average="weighted") acc = accuracy_score(labels,preds) return {"accuracy":acc, "f1":f1} ``` My model shouldn’t be a sentiment classifier but a multilabel classifier which classifies customer reviews based on different labels (e.g. customer support, etc.). When I train/finetune my model with the Huggingface Trainer() instance: ``` #Encoding the data data_encoded = data_dict.map(tokenize, batched=True, batch_size=None) data_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"]) #Specify training arguments logging_steps=len(data_encoded["train"]) training_args = TrainingArguments(output_dir='./results', num_train_epochs=3, learning_rate=2e-5, per_device_train_batch_size=batch_size_cla, per_device_eval_batch_size=batch_size_cla, load_best_model_at_end=True, metric_for_best_model="f1", weight_decay=0.01, evaluation_strategy="steps", eval_steps = 2, disable_tqdm=False, logging_steps=logging_steps) #Specify trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=data_encoded['train'], eval_dataset=data_encoded['test'] ) #Train trainer.train() ``` After 6 steps I get the following error: ``` ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/modules/sparse.py in forward(self, input) 156 157 def forward(self, input: Tensor) -> Tensor: --> 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, 160 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/miniconda3/envs/textmallet/lib/python3.9/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2041 # remove once script supports set_grad_enabled 2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2044 2045 IndexError: index out of range in self ``` Does anyone have any idea what I could change in my code? Cheers
08-24-2021 13:09:19
08-24-2021 13:09:19
Hello! It seems your model and tokenizer are mismatched: the tokenizer generated an ID that the model doesn't understand. I don't see your tokenizer initialization in your code, do you mind showing me how you initialize it?<|||||>Yeah you are right! Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
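For reference, keeping the tokenizer checkpoint identical to the model checkpoint avoids this class of error; a minimal sketch with the checkpoint used in the question:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-german-dbmdz-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=8)
```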
transformers
13,238
closed
Correct way to use pre-trained models - Any document on this?
I want to solve a `multiclass-multilabel (MLMC)` **classification** problem using the `Conv-BERT` model. **Steps that I have taken:** I downloaded the Conv-BERT model from this link: https://huggingface.co/YituTech/conv-bert-base (**YituTech/conv-bert-base**) ``` from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam tokenizer = BertTokenizer.from_pretrained("path_to_Conv-Bert_model", do_lower_case = True) model = BertForSequenceClassification.from_pretrained("path_to_Conv-Bert_model", num_labels = 240) model.cuda() ``` I want to understand: can we call any classification module from Hugging Face and pass any pre-trained model to it, like `Roberta`, `Conv-BERT`, and so on (as in the example above)? Is it mandatory to use a Conv-BERT classification pre-trained model?
08-24-2021 12:57:48
08-24-2021 12:57:48
Hello! We have several documents that can help you get started! First of all, the [quicktour](https://huggingface.co/transformers/quicktour.html), and the [free course](https://huggingface.co/course/chapter1) of the HF ecosystem may help you out.<|||||>> Hello! We have several documents that can help you get started! First of all, the [quicktour](https://huggingface.co/transformers/quicktour.html), and the [free course](https://huggingface.co/course/chapter1) of the HF ecosystem may help you out. What about my code above? Is this the correct way of doing things?<|||||>@pratikchhapolika Hi Pratik, yes you can use most models for sequence classification. You can do the following: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("name_of_base_model") model = AutoModelForSequenceClassification.from_pretrained("name_of_base_model") # name_of_base_model can be bert-base-cased, albert-base-v2, roberta-large, etc. ``` The full list is [here](https://huggingface.co/transformers/pretrained_models.html) You can then use the model & finetune it on the desired classification task (e.g. GLUE / SUPERGLUE) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,237
closed
Fix broken links in Splinter documentation
# What does this PR do? Fixed two broken links in Splinter documentation: Before fix: `https://huggingface.co/transformers/model_doc/master/splinter.html` After fix: `https://huggingface.co/transformers/master/model_doc/splinter.html`
08-24-2021 11:27:25
08-24-2021 11:27:25
transformers
13,236
closed
Upgrade `os.path` to use `pathlib.Path` API for `from_pretrained` internals
# 🚀 Feature request Use `pathlib.Path` instead of `os.path` in places like `src/transformers/file_utils.py` and `src/transformers/configuration_utils.py`. ## Motivation I am using [cloudpathlib](https://github.com/drivendataorg/cloudpathlib), a library that wraps remote paths as `pathlib.Path`-like objects. But I cannot use a cloudpathlib remote directory for `from_pretrained` because the existing code uses `os.path`. Using `pathlib`'s API would solve this. ## Your contribution I can submit a PR for this. I see pathlib used elsewhere inside `src/transformers`, so I think it's ok.
08-24-2021 11:26:16
08-24-2021 11:26:16
This requires more code than I anticipated, and the caching isn't as fast as local. I think I will just download to local and call `from_pretrained` with local path.<|||||>Thanks for exploring > I think I will just download to local and call from_pretrained with local path. Yes it's the way we've been recommending 👍
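For anyone landing here with the same need, a sketch of that workaround (the bucket path is a placeholder, and `download_to` is assumed to be available on `CloudPath` in the cloudpathlib version you use, so double-check it):

```python
from cloudpathlib import CloudPath
from transformers import AutoModel

remote_dir = CloudPath("s3://my-bucket/models/my-bert")  # placeholder remote directory
remote_dir.download_to("local_model_dir")

model = AutoModel.from_pretrained("local_model_dir")
```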
transformers
13,235
closed
BeitForMaskedImageModeling forward not using bool_masked_pos
### Who can help @NielsRogge ## Information I am reading the code of BEiT, trying to use `BeitForMaskedImageModeling`, and I found that `bool_masked_pos` is defined but not used in `BeitForMaskedImageModeling`'s forward. In my understanding, `bool_masked_pos` is used to mask out the input image tokens, and thus should be passed to the `self.beit` forward here. https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/models/beit/modeling_beit.py#L723 Furthermore, the documentation of `BeitForMaskedImageModeling`'s `forward` lacks a description of `bool_masked_pos`.
08-24-2021 10:52:49
08-24-2021 10:52:49
Hi, Thanks for looking into this model. Looking at it now, it's indeed a mistake; I'm not sure why I forgot to pass `bool_masked_pos` to `BeitModel`. I have a Colab notebook in which I tried to reconstruct DALL-E's visual tokens using `BeitForMaskedImageModeling`: https://colab.research.google.com/drive/1Mjt-3jHw9HYMXECmSdDlbiG59ZAw-Z0T?usp=sharing#scrollTo=ZwTO9fbhPOxi It was not working as expected, so this is probably the cause. > Furthermore, the documentation of BeitForMaskedImageModeling's forward lacks a description of bool_masked_pos. This is a good point and should be added. I will fix both in a PR. Thanks!
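For reference, a minimal sketch of how `bool_masked_pos` is meant to be used once it is forwarded to `BeitModel`; the checkpoint name and the random 40% mask are only illustrative:

```python
import requests
import torch
from PIL import Image
from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

checkpoint = "microsoft/beit-base-patch16-224-pt22k"
feature_extractor = BeitFeatureExtractor.from_pretrained(checkpoint)
model = BeitForMaskedImageModeling.from_pretrained(checkpoint)

pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

# mask a random ~40% of the (image_size / patch_size)^2 patches
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.rand(1, num_patches) < 0.4

outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
# outputs.logits contains, per patch, a prediction over the visual-token vocabulary
```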
transformers
13,234
closed
Error while trying to run run_mlm_wwm.py using my saved model: TypeError: 'NoneType' object is not iterable
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.7.0 - Platform: Linux-4.18.0-25-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. I have trained a BertForSequenceClassification model, saved the model and tokenizer: ``` model.save_pretrained('output_mlm_cls') tokenizer.save_pretrained('output_mlm_cls') ``` 2. I tried to run run_mlm_wwm.py, giving the the saved model above as the input model: python run_mlm_wwm.py \ --model_name_or_path /path/to/output_mlm_cls \ --train_file /path/to/my_data.txt \ --do_train \ --output_dir /output_dir <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> I got this error message: Traceback (most recent call last): File “run_mlm_wwm.py”, line 408, in main() File “run_mlm_wwm.py”, line 367, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File “/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/trainer.py”, line 1066, in train self._load_state_dict_in_model(state_dict) File “/home/cl/jungmin-c/.pyenv/versions/anaconda3-5.1.0/envs/jp/lib/python3.7/site-packages/transformers/trainer.py”, line 1387, in _load_state_dict_in_model if set(load_result.missing_keys) == set(self.model._keys_to_ignore_on_save): TypeError: ‘NoneType’ object is not iterable ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should run and train the input model on the whole word masking MLM task. When I run the same thing only changing --model_name_or_path to one of the HuggingFace provided pretrained models (cl-tohoku/bert-base-japanese-whole-word-masking), it runs without a problem, so it's not the problem with the dataset.
08-24-2021 08:04:57
08-24-2021 08:04:57
Hi, since your case is an MLM task, you should probably use `BertForMaskedLM` instead of `BertForSequenceClassification` to train your model first, and then feed it into the `run_mlm_wwm.py` script.<|||||>@qqaatw Thank you for your suggestion! > Hi, since your case is an MLM task, you should probably use `BertForMaskedLM` instead of `BertForSequenceClassification` to train your model first, and then feed it into the `run_mlm_wwm.py` script. My objective is to see the effect of training BERT on different tasks. I am wondering whether training on the MLM task after training on classification yields better results. Is there a way to do this using the script?<|||||>I got your point. You can use `BertForPreTraining`, which includes both prediction heads (MLM and NSP), to train the sentence classification task first, then feed the trained model into `run_mlm_wwm.py` to run the MLM task. Because `BertForPreTraining` already has both heads, running MLM afterwards will no longer raise an error about the missing MLM head.<|||||>@qqaatw That's a neat solution! Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
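As an illustration of how `from_pretrained` handles mismatched heads (not a snippet from this thread), loading the sequence-classification checkpoint into `BertForMaskedLM` keeps the shared encoder weights and only re-initializes the MLM head; the paths are placeholders:

```python
from transformers import BertForMaskedLM, BertTokenizer

checkpoint = "/path/to/output_mlm_cls"  # placeholder: directory saved with save_pretrained

tokenizer = BertTokenizer.from_pretrained(checkpoint)

# A warning about newly initialized `cls.predictions.*` weights is expected here:
# the BERT encoder is reused, while the MLM head starts from random weights.
model = BertForMaskedLM.from_pretrained(checkpoint)

# Re-save so the directory now contains an MLM-shaped checkpoint that can be
# passed to run_mlm_wwm.py via --model_name_or_path.
model.save_pretrained("/path/to/output_mlm_ready")
tokenizer.save_pretrained("/path/to/output_mlm_ready")
```

Whether this sidesteps the `TypeError` above depends on the `transformers` version, so treat it as an illustration of the checkpoint/head relationship rather than a guaranteed fix.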
transformers
13,233
closed
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at 'yyy' and are newly initialized
Hi Everyone, I am trying to use a fine tuned GPT Neo Model for my inferencing, am getting the below warning - ### **_Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at 'yyy' and are newly initialized..._** I have used deep speed to fine tune the model. Because of this warning, **_the generated text during inference is giving some random meaningless text_**. Please suggest how to load those weights. ### Inference code is below (highlighted in bold is the line of code) from transformers import GPT2Tokenizer, GPT2LMHeadModel import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' tokenizer = GPT2Tokenizer.from_pretrained('finetuned') tokenizer.padding_side = "left" tokenizer.pad_token = tokenizer.eos_token ### model = GPTNeoForCausalLM.from_pretrained("finetuned").half().to("cuda") Thanks, Balaji
08-24-2021 07:20:26
08-24-2021 07:20:26
Hi, I'm not very familiar with GPT Neo, but should [this](https://huggingface.co/transformers/model_doc/gpt_neo.html#transformers.GPTNeoForCausalLM) be the correct model for fine-tuned GPT Neo?<|||||>Apologies for pasting the wrong code, below is the one. Thanks @qqaatw for pointing it out. This is the line of code where am seeing the warning. ### model = GPTNeoForCausalLM.from_pretrained("finetuned").half().to("cuda")<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
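A hedged sketch of one way to sanity-check the fine-tuned folder: let the Auto classes resolve the architecture from the saved `config.json` instead of hard-coding a GPT-2 class. If `model_type` is not `gpt_neo`, the saved config does not match the fine-tuned weights, which would explain the warning. The prompt and generation settings below are arbitrary:

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

config = AutoConfig.from_pretrained("finetuned")
print(config.model_type)  # expected to be "gpt_neo" for a GPT Neo checkpoint

tokenizer = AutoTokenizer.from_pretrained("finetuned")
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token

# AutoModelForCausalLM picks GPTNeoForCausalLM based on the saved config
model = AutoModelForCausalLM.from_pretrained("finetuned").half().to("cuda")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```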
transformers
13,232
closed
run_translation_no_trainer with MBart: unsupported operand type(s) for /: 'dict' and 'int'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Ubuntu 20.04 - Python version: 3.8 - PyTorch version (GPU?): 1.8.2 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes In addition, I add the details of `accelerate` config. ```shell In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2 How many different machines will you use (use more than 1 for multi-node training)? [1]: 1 Do you want to use DeepSpeed? [yes/NO]: NO How many processes in total will you use? [1]: 1 Do you wish to use FP16 (mixed precision)? [yes/NO]: yes ``` ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): facebook/mbart-large-cc25 The problem arises when using: * [x] the official example scripts: examples/pytorch/translation/run_translation_no_trainer.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: finetuning en-ro dataset of wmt16 translation ## To reproduce I use the following script: note that i rename the `run_translation_no_trainer.py` with `run.py` ```shell accelerate launch run.py \ --model_name_or_path facebook/mbart-large-cc25 \ --source_lang en \ --target_lang ro \ --dataset_name wmt16 \ --dataset_config_name ro-en \ --output_dir ~/tmp/tst-translation ``` ## Expected behavior ![image](https://user-images.githubusercontent.com/26213546/130571764-92200a1f-222c-4267-a237-3b97e39c5e7c.png) `outputs` is still an object of `Seq2SeqLMOutput`, however, the `outputs.loss` is a `dict` with keys `dict_keys(['loss', 'logits', 'past_key_values', 'encoder_last_hidden_state'])` ## Additional information By the way, I also test another script: `examples/pytorch/text-classification/run_glue_no_trainer.py` It met the same problem. You can view it on colab: https://colab.research.google.com/drive/1BLt5rtHFdHaRliyqj_BKmM1_uzQclQHw?usp=sharing ![image](https://user-images.githubusercontent.com/26213546/130642391-f4f34bb0-e22c-4201-9382-0239b25a28e7.png)
08-24-2021 07:06:21
08-24-2021 07:06:21
I have tested `outputs = accelerator.unwrap_model(model)(**batch)`. It works well, and the `outputs.loss` is a `Tensor` as expected.<|||||>I have checked that it is caused by `convert_to_fp32` in `accelerate/utils.py`. ```python def convert_to_fp32(tensor): if isinstance(tensor, (list, tuple)): return honor_type(tensor, (convert_to_fp32(t) for t in tensor)) elif isinstance(tensor, dict): return type(tensor)({k: convert_to_fp32(v) for k, v in tensor.items()}) elif not hasattr(tensor, "dtype") or tensor.dtype != torch.float16: return tensor return tensor.float() ``` when `return type(tensor)({k: convert_to_fp32(v) for k, v in tensor.items()})`, it's actually implemented as `Seq2SeqLMOutput({k: convert_to_fp32(v) for k, v in tensor.items()})`. The `{k: convert_to_fp32(v) for k, v in tensor.items()}` denotes a `dict` object which has keys = `'loss', 'logits', 'past_key_values', 'encoder_last_hidden_state'`. The result of `Seq2SeqLMOutput(...)` is that the value of `loss` attribute turn to a dict. So the `outputs.loss` is a dict. For example: ```python output = { 'loss': torch.randn(1), 'logits':torch.randn(2,2,2), 'past_key_values': None, 'encoder_last_hidden_state' : torch.randn(2,2,2), } Seq2SeqLMOutput(output) ``` ```python Seq2SeqLMOutput(loss={'loss': tensor([-0.8864]), 'logits': tensor([[[-0.5915, -0.9891], [ 0.5060, -1.2748]], [[ 0.8566, -0.6958], [-0.2949, -0.7065]]]), 'past_key_values': None, 'encoder_last_hidden_state': tensor([[[-0.9881, 0.3471], [-0.3888, 3.0862]], [[ 0.2813, 0.4011], [-0.1960, 1.0331]]])}, logits=None, past_key_values=None, decoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, encoder_last_hidden_state=None, encoder_hidden_states=None, encoder_attentions=None) ``` <|||||>To fix it,I modified `run_translation_no_trainer.py` with the following snippet in the training loop, and it works well. ```python outputs = model(**batch, return_dict=False) loss = outputs[0] loss = loss / args.gradient_accumulation_steps accelerator.backward(loss) ```<|||||>For my solution, i think the `convert_to_fp32` should be corrected as follows: ```python def convert_to_fp32(tensor): if isinstance(tensor, (list, tuple)): return honor_type(tensor, (convert_to_fp32(t) for t in tensor)) elif isinstance(tensor, dict): return type(tensor)(**{k: convert_to_fp32(v) for k, v in tensor.items()}) # add ** elif not hasattr(tensor, "dtype") or tensor.dtype != torch.float16: return tensor return tensor.float() ``` Do you think so? @patil-suraj
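To make the difference concrete, the same toy dict unpacked with `**` maps each key onto the matching `Seq2SeqLMOutput` field, so `loss` stays a tensor:

```python
import torch
from transformers.modeling_outputs import Seq2SeqLMOutput

output = {
    "loss": torch.randn(1),
    "logits": torch.randn(2, 2, 2),
    "past_key_values": None,
    "encoder_last_hidden_state": torch.randn(2, 2, 2),
}

fixed = Seq2SeqLMOutput(**output)  # ** spreads the dict onto the dataclass fields
print(type(fixed.loss))    # <class 'torch.Tensor'>
print(fixed.logits.shape)  # torch.Size([2, 2, 2])
```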
transformers
13,231
closed
codecarbon plugin issues
https://github.com/huggingface/transformers/pull/12304 added the `codecarbon` plugin and there are multiple issues with it: 1. it needs to be documented in https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/training_args.py#L316-L319 along all the other plugins 2. It doesn't respect user's logging level. It needs to read the set for the current rank log-level and pass it explicitly to the object instance here: https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/integrations.py#L782 via the `log_level` argument (but which expects a string like "warning" and not the real `logging.WARNING` which is normally used, so one needs to remap from real `logging` level to the string CC expects. 3. same logs are logged more than once in different formats: ``` [codecarbon INFO @ 19:33:14] Tracking Nvidia GPU via pynvml [codecarbon INFO @ 19:33:14] Tracking Intel CPU via RAPL interface 08/23/2021 19:33:14 - INFO - codecarbon - Tracking Nvidia GPU via pynvml 08/23/2021 19:33:14 - INFO - codecarbon - Tracking Intel CPU via RAPL interface ``` 4. it breaks the training as it can't find some server. ``` Traceback (most recent call last): File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen retries = retries.increment( File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/urllib3/util/retry.py", line 574, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='get.geojs.io', port=443): Max retries exceeded with url: /v1/ip/geo.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fcd95312700>: Failed to establish a new connection: [Errno 111] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/core/util.py", line 10, in suppress yield File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/contextlib.py", line 75, in inner return func(*args, **kwds) File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 348, in stop emissions_data = self._prepare_emissions_data() File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 367, in _prepare_emissions_data geo: GeoMetadata = self._get_geo_metadata() File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/emissions_tracker.py", line 612, in _get_geo_metadata return GeoMetadata.from_geo_js(self._data_source.geo_js_url) File "/mnt/nvme1/code/huggingface/codecarbon/codecarbon/external/geography.py", line 83, in from_geo_js response: Dict = requests.get(url, timeout=0.5).json() File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/api.py", line 76, in get return request('get', url, params=params, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/sessions.py", line 655, in send r = 
adapter.send(request, **kwargs) File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='get.geojs.io', port=443): Max retries exceeded with url: /v1/ip/geo.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fcd95312700>: Failed to establish a new connection: [Errno 111] Connection refused')) 08/23/2021 19:33:20 - WARNING - codecarbon - stopping. ``` This part of CC never worked for me. It always fails here for me and not just in this integration. Only Offline version of the tracker works w/o this failure. Could we use the offline tracker by default instead? ---------------------- To reproduce - I run: ``` python examples/pytorch/translation/run_translation.py --model_name_or_path google/mt5-small --do_train --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir output_dir --per_device_train_batch_size=4 --logging_step 2 --save_steps 0 --fp16 --max_train_samples 10 --save_total_limit 0 --overwrite_output_dir --save_strategy no ``` Thank you. @JetRunner, @sgugger
08-24-2021 02:52:57
08-24-2021 02:52:57
Thanks for the feedback @stas00! I don't think it runs automatically - it needs to be activated by setting a flag, just like other integrations. I will double check what's going on there. For other issues, I'll investigate as well.<|||||>Investigating it some more it's like all the other plugins. So it runs automatically unless the user explicitly turns it off. Not my favorite Trainer feature. I had to add `--report_to none` to disable it and everything else. https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/src/transformers/training_args.py#L316-L319 That doc is outdated. I will update the Issue to remove the incorrect part of the report, as I can turn it off explicitly. Now that it can be disabled there is absolutely no rush with the rest.<|||||>@stas00 I 100% agree with you that the default value of `--report_to` should be `none` instead of `all`. Detecting what's installed then enabling everything is a weird and aggressive behavior. cc @sgugger <|||||>This is planned to be so in v5. But for back-compat reasons it remains "all" - I don't remember all the history, but it probably needs to be on by default for `tensorboard` and none of the other plugins (or perhaps several plugins). i.e. most likely the default shouldn't be 'all' but only the list of plugins that are required for back-compat.<|||||>Sounds good. @stas00 For the online tracker, could you `ping get.geojs.io` so I can determine what's wrong there? Since it works fine on my side.<|||||>``` $ ping get.geojs.io PING get.geojs.io (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms 64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.061 ms 64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.038 ms ^C --- get.geojs.io ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2041ms rtt min/avg/max/mdev = 0.036/0.045/0.061/0.011 ms ``` but it doesn't respond to 80 or 443 ports: ``` $ HEAD https://get.geojs.io 500 Can't connect to get.geojs.io:443 (Connection refused) Content-Type: text/plain Client-Date: Tue, 24 Aug 2021 06:05:27 GMT Client-Warning: Internal response $ HEAD http://get.geojs.io 500 Can't connect to get.geojs.io:80 (Connection refused) Content-Type: text/plain Client-Date: Tue, 24 Aug 2021 06:05:36 GMT Client-Warning: Internal response ``` As I mentioned in OP this is not a new problem, has been like this for at least one month.<|||||>Got it. My guess is the `geojs` service is not available in some countries. Let me see how we can add an offline option.<|||||>It will also break on instances without internet, like JZ.<|||||>pinging @JetRunner <|||||>> pinging @JetRunner will add when I'm back from the leave<|||||>Unfortunately it doesn't look like this is ever going to be fixed. Could we please disable this plugin from loading by default if it has no maintainer that takes care of making it work? @sgugger, @LysandreJik <|||||>Switching to `codecarbon.OfflineEmissionsTracker` should at least solve this particular issue.<|||||>I'm happy to review any PR @stas00 :-)<|||||>The problem is that this feature was added without tests. So it's far from just changing its main class. Moreover we dropped `codecarbon` from the BigScience since it has multiple issues and was causing more issues than it was solving. After investing so much time into trying to make this module work, I'm not keen on giving it any more of my energy. So my vote is to disable it for now until it becomes more mature. 
To keep it, at the very least @JetRunner (who added it) or someone else needs to write a test for it.<|||||>Hey @stas00 As you know I have already left HF for a full-time PhD and I don't have any bandwidth to do it. I don't want to comment on the quality of codecarbon but feel free to remove it if you think it's buggy. For BigScience, our WG will switch to another way for carbon emission estimation.<|||||>Thank you for letting us know that you have no resources to work on this, @JetRunner <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
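For reference, a sketch of what the offline switch could look like, assuming `OfflineEmissionsTracker` accepts `country_iso_code`, `output_dir` and the string-valued `log_level` mentioned above (the ISO code and level are illustrative):

```python
import logging

from codecarbon import OfflineEmissionsTracker

# codecarbon expects a lowercase level name ("warning"), not logging.WARNING itself
log_level_name = logging.getLevelName(logging.WARNING).lower()

tracker = OfflineEmissionsTracker(
    country_iso_code="FRA",      # required, since no geo lookup is performed offline
    output_dir="output_dir",
    log_level=log_level_name,
)
tracker.start()
# ... training loop ...
emissions = tracker.stop()
print(f"estimated emissions: {emissions} kg CO2eq")
```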
transformers
13,230
closed
Bert Loses Patience - Batch Inference Doubt
In PABEE, we evaluate every layer's output against a given patience threshold. This can be done with a simple if/else when the batch size is 1. How does this code work when batch_size > 1: https://github.com/huggingface/transformers/blob/2772d3e79d66925cf4adeaffd8be610f0ab177b6/examples/research_projects/bert-loses-patience/pabee/modeling_pabee_bert.py#L224 When the batch size is greater than 1, I believe every input in the batch will have its own patience outcome, but torch.all would wait for the longest patience outcome. @JetRunner Won't the patience counter dimension be equal to the batch size?
08-24-2021 00:54:20
08-24-2021 00:54:20
Thanks for your interest in our work, Tanmay! The code only considers the scenario where `batch_size=1`. Theoretically, we can do batch inference, but as you noted we have to wait for the longest patience outcome anyway, so it's not very useful (see the sketch below). <|||||>@JetRunner Thanks for the prompt response! Is there any better way to handle a batch size greater than 1?<|||||>I don't think so. If you really want to fully use the GPU, you can try multi-threading, but I don't really recommend that. `bs=1` is fast enough for me.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
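A schematic sketch (not the actual PABEE code) of the gating logic discussed here: with per-example patience counters, the batch can only exit once every example has been stable long enough, so the slowest example sets the pace:

```python
import torch

batch_size, num_layers, num_labels, patience = 4, 12, 3, 3

patient_counter = torch.zeros(batch_size, dtype=torch.long)
prev_pred = torch.full((batch_size,), -1, dtype=torch.long)

for layer in range(num_layers):
    # stand-in for the per-layer internal classifier logits
    logits = torch.randn(batch_size, num_labels)
    pred = logits.argmax(dim=-1)

    # counter increments for examples whose prediction is unchanged, resets otherwise
    patient_counter = torch.where(
        pred == prev_pred, patient_counter + 1, torch.zeros_like(patient_counter)
    )
    prev_pred = pred

    # with batch_size > 1, the exit condition is only met once *all* examples agree
    if torch.all(patient_counter >= patience):
        print(f"early exit after layer {layer}")
        break
```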
transformers
13,229
closed
Can't install transformers in Conda environment with python 3.9
## Environment info - `transformers` version: 4.8.1 - Platform: macOS 11.5.2 - Python version: 3.9.6 - Conda Version: 4.10.3 ## Information I'm trying to install transformers in a condo environment and get and error "conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other:" with blank lines beneath ## To reproduce Steps to reproduce the behavior: ``` shell conda create -n py39 python=3.9 conda activate py39 conda install -c huggingface transformers ``` Verbose conda output of last command: ``` Collecting package metadata (current_repodata.json): ...working... Unable to retrieve repodata (response: 404) for https://conda.anaconda.org/HuggingFace/osx-64/current_repodata.json done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve. Solving environment: ...working... Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed Traceback (most recent call last): File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 261, in install unlink_link_transaction = solver.solve_for_transaction( File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 114, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 157, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 706, in _add_specs raise UnsatisfiableError({}) conda.exceptions.UnsatisfiableError: Did not find conflicting dependencies. If you would like to know which packages conflict ensure that you have enabled unsatisfiable hints. 
conda config --set unsatisfiable_hints True During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/exceptions.py", line 1079, in __call__ return func(*args, **kwargs) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/main.py", line 84, in _main exit_code = do_call(args, p) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/conda_argparse.py", line 83, in do_call return getattr(module, func_name)(args, parser) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/main_install.py", line 20, in execute install(args, parser, 'install') File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 308, in install raise e File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/cli/install.py", line 295, in install unlink_link_transaction = solver.solve_for_transaction( File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 114, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 157, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 275, in solve_final_state ssc = self._add_specs(ssc) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/core/solve.py", line 704, in _add_specs ssc.r.find_conflicts(spec_set) File "/usr/local/anaconda3/lib/python3.8/site-packages/conda/resolve.py", line 352, in find_conflicts raise UnsatisfiableError(bad_deps, strict=strict_channel_priority) conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versions ``` ## Expected behavior Installs without error
08-24-2021 00:03:22
08-24-2021 00:03:22
It seems that its works with python 3.8. Still, I don't get why it wouldn't with 3.9<|||||>Hello! Could you add the unsatisfiable hints with `conda config --set unsatisfiable_hints True` so that we may see what packages are incompatible in python 3.9? Thank you!<|||||>Set the config, nothing has changed<|||||>No this won't change anything, this is so that we may understand what's happening. Please set the config and copy the logs here so that we may help, thank you.<|||||>I know, I meant that the logs stayed absolutely the same. Weird 🤨<|||||>I am having the same issue ```bash $ conda install -c huggingface transformers Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - python=3.9 - transformers Use "conda info <package>" to see the dependencies for each package. ``` This is a fresh anaconda environment. I did install PyTorch 1.9 with conda, which installs python 3.9.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale, this is an actual issue<|||||>I'm able to install latest transformers from the huggingface channel with Python 3.9.7.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.