repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
3,292
closed
NER Pipeline returns null
# 🐛 Bug ## Information Model I am using (NER Pipeline): Language I am using the model on (English): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import pipeline # Allocate a pipeline for named entity recognition nlp = pipeline('ner') nlp(['We are very happy to include pipeline into the transformers repository.']) ``` **returns null.** <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Expect to have the named entity label for each token. But it returns null. ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Jupyter lab - Python version: 3.6.1 - PyTorch version (GPU?): CPU-1.4.0 - Tensorflow version (GPU?): CPU-1.15.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
03-16-2020 02:54:03
03-16-2020 02:54:03
By "it returns null" you mean it returns an empty array? That's because it didn't identify any named entity in your sequence.
transformers
3,291
closed
A lot of examples in the docs can't run successfully
I used the examples in the docs, but a lot of the examples can't run successfully. What's wrong?
03-16-2020 02:41:07
03-16-2020 02:41:07
@patrickvonplaten my OS is macOS 10.14.6, Python 3.6.10, TensorFlow 2.0, and the transformers version is the source code from GitHub<|||||>Hi @policeme, which example did you use? <|||||>![image](https://user-images.githubusercontent.com/30991932/76741738-050c9280-67ab-11ea-98a2-b3612b6695a9.png) like this<|||||>@patrickvonplaten <|||||>your example is like this ![image](https://user-images.githubusercontent.com/30991932/76741937-5288ff80-67ab-11ea-8405-b30c191869fb.png) <|||||>Please don't paste screenshots of your code on issues. Copy and paste the code in code format instead; we can't copy code from a screenshot, and retyping it is very time-consuming. Regarding the problem you have: how did you train the model that was saved in `...ckpt.index`? Did you use this library? From your error message, it seems like your BERT TF model saved in `.ckpt.index` does not have the correct form. The example you mention should be used if you trained your model with this library.<|||||>Sorry about that, this is my code: `config = AutoConfig.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_config.json') tokenizer = AutoTokenizer.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_config.json') model = AutoModel.from_pretrained(r'/Users/maxiong/Workpace/Code/transformers/pre_model/bert_model.ckpt.index', from_tf=True, config=config)` This BERT model is the official Google Chinese BERT base model. In your answer you said the model can be used with this library if it was trained with this library. If so, and I want to use my own pretrained BERT model, how can I use it with this library?
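A minimal sketch of loading an original Google BERT TF checkpoint along the lines discussed above. It assumes the checkpoint directory contains the usual `vocab.txt`, `bert_config.json`, and `bert_model.ckpt.*` files; the path is taken from the comment and is only illustrative:

```python
from transformers import BertConfig, BertModel, BertTokenizer

model_dir = '/Users/maxiong/Workpace/Code/transformers/pre_model'  # illustrative path from the comment

# build the config from the Google-style json file
config = BertConfig.from_json_file(f'{model_dir}/bert_config.json')

# the tokenizer needs the vocab file, not the config file
tokenizer = BertTokenizer(f'{model_dir}/vocab.txt')

# from_tf=True lets from_pretrained read the original .ckpt.index checkpoint
model = BertModel.from_pretrained(f'{model_dir}/bert_model.ckpt.index', from_tf=True, config=config)
```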
transformers
3,290
closed
[WIP] Lightning glue example
This PR adds an example of using PyTorch Lightning to run the GLUE benchmark. Additionally, I altered `transformer_base.py` to use auto models and moved it to the examples directory so it can be copied in by any script that wishes to use it. Preferably, the base transformer would have subclasses for the different types of tasks, but I just used a dictionary keyed on a mode passed at init instead (i.e. NER uses `AutoModelForTokenClassification` and GLUE uses `AutoModelForSequenceClassification`); a minimal sketch of that mapping follows this record.
03-16-2020 00:30:16
03-16-2020 00:30:16
@srush Can you please take a look?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=h1) Report > Merging [#3290](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8320feec09309a94f673e1e7ce2a93da81eb3366&el=desc) will **increase** coverage by `0.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3290/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3290 +/- ## ========================================== + Coverage 77.81% 77.99% +0.18% ========================================== Files 98 98 Lines 16666 16666 ========================================== + Hits 12969 12999 +30 + Misses 3697 3667 -30 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.27%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.47% <0.00%> (+5.00%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=footer). Last update [8320fee...dd1b783](https://codecov.io/gh/huggingface/transformers/pull/3290?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Looks excellent. I will let @LysandreJik merge tomorrow, and confirm multi-gpu / TPU work. Want to try SQuAD next?<|||||>> Want to try SQuAD next? Sure, I'll give it a go.
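As referenced in the PR description above, a minimal sketch of a task-to-auto-model mapping keyed on a string passed at init. The dictionary name and key strings here are made up for illustration, not taken from the PR:

```python
from transformers import AutoModelForSequenceClassification, AutoModelForTokenClassification

# hypothetical task-to-class mapping, keyed on a mode string passed at init
MODEL_MODES = {
    "sequence-classification": AutoModelForSequenceClassification,  # e.g. GLUE
    "token-classification": AutoModelForTokenClassification,        # e.g. NER
}

def load_model(mode, model_name_or_path, **kwargs):
    # pick the right auto class for the task and load the pretrained weights
    return MODEL_MODES[mode].from_pretrained(model_name_or_path, **kwargs)

model = load_model("sequence-classification", "bert-base-cased", num_labels=2)
```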
transformers
3,289
closed
GPT-2 attention_mask reshaping uses input_ids first dimension
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT-2 ## To reproduce Use attention_mask in **GPT2LMHead** while feeding **inputs_embeds** instead of **input_ids**. The code fails because line 427 of modeling_gpt2.py uses the first dimension of input_ids to reshape the mask ``` if attention_mask is not None: batch_size = input_ids.shape[0] attention_mask = attention_mask.view(batch_size, -1) ``` I fixed it by changing **input_ids.shape[0]** to **attention_mask.shape[0]**, but I think it would be more correct to obtain a single batch_size from whichever input format is available (see the sketch after this record). **Update** I think `batch_size = input_shape[0]` is the best way. - `transformers` version: **master** - Python version: 3.7 - PyTorch version (GPU?): 1.4
03-15-2020 18:31:12
03-15-2020 18:31:12
Hey @lazarevskiVsg, thanks a lot for pointing this out! It should be fixed now :-)
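A minimal, hypothetical sketch of the fix suggested in the issue above: derive the batch size from whichever input is actually provided instead of assuming `input_ids` is present. This is an illustrative helper, not the actual library code:

```python
import torch

def reshape_attention_mask(attention_mask, input_ids=None, inputs_embeds=None):
    # derive the shape from whichever input is actually provided
    if input_ids is not None:
        input_shape = input_ids.size()
    elif inputs_embeds is not None:
        input_shape = inputs_embeds.size()[:-1]  # drop the embedding dimension
    else:
        raise ValueError("You have to specify either input_ids or inputs_embeds")
    if attention_mask is not None:
        attention_mask = attention_mask.view(input_shape[0], -1)
    return attention_mask

# works with embeddings only, where the reported code would fail
embeds = torch.randn(2, 5, 768)
mask = torch.ones(2, 5)
print(reshape_attention_mask(mask, inputs_embeds=embeds).shape)  # torch.Size([2, 5])
```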
transformers
3,288
closed
Dockerhub image huggingface/transformers_cpu for version 2.5.1 has version 2.5.0 installed
# 🐛 Bug ## Information Model I am using any model introduced in 2.5.1 The problem arises when using: pulling `huggingface/transformers_cpu:2.5.1` from dockerhub ## To reproduce Steps to reproduce the behavior: 1. pull docker image from dockerhub 2. run docker container 3. run `pip freeze` to see `transformers==2.5.0` ## Expected behavior `pip freeze` in the docker container should show: `transformers==2.5.1`
03-15-2020 14:44:06
03-15-2020 14:44:06
Should be fixed; I'll push updated images for all the other Dockerfiles in the next few hours. Thanks for reporting @edwardcqian <|||||>I'm closing, feel free to reopen if I missed something 👍
transformers
3,287
closed
Unexpected output from feature extraction pipeline
Hi everyone, I'm not sure if it is a bug or if I'm simply overlooking something, so I did not want to submit a bug report yet. I have the following example code: ``` from transformers import pipeline, AutoTokenizer import numpy as np tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', add_special_tokens=False) #initialize pipeline nlp = pipeline('feature-extraction', model='bert-base-uncased', config='bert-base-uncased', tokenizer=tokenizer, device=1) features = nlp("Why is Howard asking questions about the food after Leonard gives him a carton ?") features = np.squeeze(features) print(features.shape) ``` I expect the output: (15,768) But I receive the output: (18,768) I think there are only 15 tokens but somehow the shape is 18. What am I missing here? Is this output expected and am I simply missing something or is there more to it?
03-15-2020 13:55:11
03-15-2020 13:55:11
The tokenizer adds special tokens (here, specific to BERT) at the beginning and end of the sentence. You can check that with: ```python tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') len(tokenizer.encode(TEXT)) == 18 ```<|||||>@julien-c is there any way to avoid the special tokens when extracting the features? I expected "add_special_tokens=False" would prevent this from happening?<|||||>I would suggest not using the Pipeline and just doing `tokenizer.encode()` then `outputs = model(input_ids)`
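Following the suggestion above to skip the pipeline, a minimal sketch of extracting features without the added special tokens (assuming a plain BERT model and PyTorch):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')

text = "Why is Howard asking questions about the food after Leonard gives him a carton ?"
# encode without [CLS]/[SEP]
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=False)])

with torch.no_grad():
    last_hidden_state = model(input_ids)[0]

print(last_hidden_state.shape)  # (1, number_of_wordpiece_tokens, 768), no special tokens added
```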
transformers
3,286
closed
Adding LM Head to Transfo-XL and first step to fixing problem with Adaptive Embeddings in TransfoXL
This PR adds LM generation capabilities to the TF transfo-xl model. The integration tests for language generation pass, so generation from a pretrained model now works in TF as well. What definitely does not work yet is running both the PT and TF models with `self.sample_softmax > 0`: - Transfo-XL uses adaptive word embeddings -> the word embeddings are broken down into 4 Embeddings of different shapes: `[20000, 1024], [20000, 1024], [160000, 64]` and `[67735, 16]`. When `self.sample_softmax > 0` though, it seems like the model expects the `normal` word embeddings with just a single weight matrix. When trying to tie the weights then, as done in line 831 (see comment below), the logic breaks. This problem seems to be more complex though, and I'd suggest solving it in another PR (and possibly having a call beforehand to make things clear).
03-15-2020 12:19:54
03-15-2020 12:19:54
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=h1) Report > Merging [#3286](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `68.29%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3286/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3286 +/- ## ========================================== + Coverage 77.48% 77.50% +0.01% ========================================== Files 99 99 Lines 16799 16768 -31 ========================================== - Hits 13017 12996 -21 + Misses 3782 3772 -10 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `64.61% <ø> (+11.28%)` | :arrow_up: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `89.15% <55.00%> (-2.04%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.00% <80.95%> (+1.37%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3286/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0.00%> (-3.76%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=footer). Last update [68ef0a1...f2cc11a](https://codecov.io/gh/huggingface/transformers/pull/3286?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> Ok good job. > > I feel like we could remove all the dead code related to sampling softmax (see my comments) Sound good, will do that then!<|||||>Dead code is now removed. This removed a lot of code. To not re-invent the wheel the code is kept in the branch `add_sampling_and_training_to_transfo_xl_models` and documented by the feature request: #3310 , if someone wants to pick up implementing sample softmax again. This PR still adds language modeling capabilities to TF transfoXL.
transformers
3,285
closed
Is there a way to evaluate GPT-2 model during fine-tuning process for accuracy and fluency?
# ❓ Questions & Help I'm trying to evaluate GPT-2 model during fine tuning process, and I'm able to calculate the loss at each epoch, but do not know how accuracy can be calculated or how to give a score to the model. Would like to get some suggestions as help. ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60483956/how-to-perform-accuracy-testing-on-text-generation-task
03-15-2020 10:28:14
03-15-2020 10:28:14
A common way of evaluating LMs is to measure their Perplexity. Say you want to finetune GPT2 on your dataset D. Define train, val and test datasets (maybe something around 75%, 10%, 15%). Measure the [perplexity](https://towardsdatascience.com/perplexity-intuition-and-derivation-105dd481c8f3) on train and val after each epoch. Compare train and eval curves for overfitting. There are a ton of other evaluation measures that might be better for your task - Google will be your best friend :-) <|||||>@patrickvonplaten can you provide example/code implementation?
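Since the follow-up asks for code, here is a minimal sketch of measuring perplexity on a held-out text with GPT-2. It uses a single short placeholder string and no batching, just to illustrate the idea:

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

val_text = "Replace this with a held-out validation document."  # placeholder text
input_ids = torch.tensor([tokenizer.encode(val_text)])

with torch.no_grad():
    # when labels are provided, the first output is the average cross-entropy loss
    loss = model(input_ids, labels=input_ids)[0]

print("perplexity:", math.exp(loss.item()))
```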
transformers
3,284
closed
Return token span from NerPipeline
# 🚀 Feature request I would like to suggest that NerPipeline should return the span in the original text where the matched entity exists. Instead of: ```python [ {"word": "New", "score": 0.9995751976966858, "entity": "LOC"}, {"word": "York", "score": 0.9996403455734253, "entity": "LOC"} ] ``` I would like to see this: ```python [ {"word": "New", "score": 0.9995751976966858, "entity": "LOC", "span": (0, 3)}, {"word": "York", "score": 0.9996403455734253, "entity": "LOC", "span": (4, 8)} ] ``` ## Motivation I'm trying to use transformers for NER, and I specifically want to return multi word entities as one phrase. With the above example, I would like to return "New York". With spans added, I would be able to merge nearby tokens into one. This makes it possible to differentiate the result of "A place called New, and a place called York" from "A place called New York". With the current scheme, they both return the same thing. ## Your contribution I think I understand NerPipeline enough to make a PR, if this is something you would be open to.
03-15-2020 10:27:24
03-15-2020 10:27:24
Oh, I guess a workaround is to pass in `ignore_labels=[]` when creating the pipeline. This makes the nlp call return all tokens, including the ones that are not part of NER. Then I can just chunk two tokens together if their label is the same and they are nearby. Does this make sense, or am I missing something fundamental?<|||||>Hi again, would you be open to a PR fixing this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
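A minimal sketch of the workaround described above: run the pipeline with `ignore_labels=[]` and merge consecutive tokens that share a label. The grouping logic is simplified and hypothetical (it ignores word-piece prefixes, for instance):

```python
from transformers import pipeline

nlp = pipeline('ner', ignore_labels=[])       # return every token, including 'O'
tokens = nlp('A place called New York.')

entities, current = [], None
for tok in tokens:
    label = tok['entity']
    if label == 'O':                          # not part of an entity; close any open group
        current = None
        continue
    if current is not None and current['entity'] == label:
        current['word'] += ' ' + tok['word']  # extend the running entity
    else:
        current = {'word': tok['word'], 'entity': label}
        entities.append(current)

print(entities)  # e.g. [{'word': 'New York', 'entity': 'I-LOC'}], depending on the model's label scheme
```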
transformers
3,283
closed
What is the most effective way to use BERT , ROBERTA , GPT-2 architectures as frozen feature extractors ?
We use pretrained self-supervised learning (SSL) models for NLP as feature extractors for downstream tasks like sentiment analysis. In most such cases, we add a simple new classification layer and **fine-tune the whole model**. With SSL models getting bigger and the amount of unsupervised training data being huge, it would be nice if we could exploit the problem-agnostic behavior of SSL embeddings. In other words, if we use them as **frozen feature extractors**, we can save a lot of time and computational cost. **Has anyone seen a good review on using SSL networks as frozen feature extractors?**
03-15-2020 09:06:20
03-15-2020 09:06:20
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
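Not a review, but as a minimal sketch of the frozen-feature-extractor setup the question describes: freeze the pretrained encoder and train only a small classification head on top. The model choice, head size, and training text below are illustrative assumptions:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base')
encoder = AutoModel.from_pretrained('roberta-base')

# freeze every encoder parameter so only the head is trained
for param in encoder.parameters():
    param.requires_grad = False

classifier = nn.Linear(encoder.config.hidden_size, 2)          # e.g. binary sentiment head
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)  # optimizer only sees the head

inputs = torch.tensor([tokenizer.encode("a great movie")])
with torch.no_grad():                     # no gradients through the frozen encoder
    features = encoder(inputs)[0][:, 0]   # first-token representation as the sentence feature
logits = classifier(features)
```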
transformers
3,282
closed
Install error: Win10, anaconda3, python3.5, pytorch
When I pip install transformers, it's not successful. My environment is Win10, anaconda3, python3.5. The error is as follows; what is wrong with it? Thank you! ## Details Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\sjh\AppData\Local\Temp\pip-install-fu05dcfq\sentencepiece\setup.py", line 29, in <module> with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f: File "C:\Users\sjh\Anaconda3\Lib\codecs.py", line 895, in open file = builtins.open(filename, mode, buffering) FileNotFoundError: [Errno 2] No such file or directory: '..\\VERSION' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in C:\Users\sjh\AppData\Local\Temp\pip-install-fu05dcfq\sentencepiece\
03-15-2020 08:19:29
03-15-2020 08:19:29
seems to be a sentencepiece issue, please open an issue at https://github.com/google/sentencepiece
transformers
3,281
closed
How to use TFBertModel to load a BERT model from a local path on my own computer
My model is stored locally on my computer, so I want to load it from there, but when I use TFBertModel to load it, this error appears. `model = TFBertModel.from_pretrained('/Users/maxiong/Workpace/Code/transformers/pre_model',config=config)` error: ![image](https://user-images.githubusercontent.com/30991932/76696213-dcf63400-66c3-11ea-8345-2d7780470709.png) these are my model files ![image](https://user-images.githubusercontent.com/30991932/76696214-e384ab80-66c3-11ea-9b97-6a0a685f67d1.png)
03-15-2020 05:50:31
03-15-2020 05:50:31
And how can I use AutoTokenizer to load a local vocab file, instead of downloading a vocab file from the server?
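A minimal sketch of loading from a local directory, assuming the directory contains files saved in this library's own format (`config.json`, `tf_model.h5`, `vocab.txt`). A raw Google `.ckpt` checkpoint would likely first need to be converted, as discussed in issue 3,291 above; the path is the one from the issue and is only illustrative:

```python
from transformers import BertConfig, BertTokenizer, TFBertModel

model_dir = '/Users/maxiong/Workpace/Code/transformers/pre_model'  # illustrative local path

config = BertConfig.from_pretrained(model_dir)         # reads the local config.json
tokenizer = BertTokenizer.from_pretrained(model_dir)   # reads the local vocab.txt, no download
model = TFBertModel.from_pretrained(model_dir, config=config)  # reads the local tf_model.h5
```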
transformers
3,280
closed
how to finetune with PreTrainedEncoderDecoder
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I am trying to run seq2seq within the same language (say it is English). I was trying to use PreTrainedEncoderDecoder (BERT, BERT), was trying to use BERT and GPT2 however, looks like it does not support this combination yet. I am trying to understand what the forward function does in the class PreTrainedEncoderDecoder, and how can we use it training my dataset. Also how can we use it at prediction time since forward need to have both decode_input_ids and encode_input_ids. I do not think we will have decode_input_ids at prediction time. Thank you <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
03-15-2020 05:33:30
03-15-2020 05:33:30
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,279
closed
[BART] Remove unused kwargs
This doesn't change anything, - k_dim and v_dim kwargs are there for other models in fairseq but we don't need them. - attention weights are returned by the AttentionModule (and ignored later) no matter what
03-14-2020 23:09:38
03-14-2020 23:09:38
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=h1) Report > Merging [#3279](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c&el=desc) will **decrease** coverage by `0.07%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3279/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3279 +/- ## ========================================== - Coverage 78.02% 77.94% -0.08% ========================================== Files 98 98 Lines 16670 16666 -4 ========================================== - Hits 13007 12991 -16 - Misses 3663 3675 +12 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <100.00%> (-0.04%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.22% <0.00%> (-1.97%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3279/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.72% <0.00%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=footer). Last update [3814e16...1b8aa30](https://codecov.io/gh/huggingface/transformers/pull/3279?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I found one more `forward` in bertabs, these others are not obviously wrong. `git grep "\.forward("` ``` examples/ner/run_pl_ner.py: outputs = self.forward(**inputs) examples/ner/run_pl_ner.py: outputs = self.forward(**inputs) ``` on a lightning module so OK ``` examples/summarization/bertabs/modeling_bertabs.py: See :obj:`onmt.modules.RNNDecoderBase.forward()` ``` In documentation so OK ``` src/transformers/modeling_bart.py: return super().forward(positions) src/transformers/modeling_roberta.py: return super().forward( ``` tried to change and got "super() is not callable". Merging!
transformers
3,278
closed
[BART] generation_mode as a kwarg not a class attribute
Currently, we set `BartModel.decoder.generation_mode = True` and then never unset it, which is confusing in the rare case where you try to finetune or extract features after generating. We can encapsulate Bart-specific logic in modeling_bart.py by just using a kwarg.
03-14-2020 22:24:56
03-14-2020 22:24:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=h1) Report > Merging [#3278](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3814e167d99c4b2e135b250d73deaa3f63ebef0c?src=pr&el=desc) will **decrease** coverage by `<.01%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3278/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3278 +/- ## ========================================== - Coverage 78.02% 78.02% -0.01% ========================================== Files 98 98 Lines 16670 16667 -3 ========================================== - Hits 13007 13004 -3 Misses 3663 3663 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.7% <ø> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.28% <100%> (-0.01%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3278/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+0.17%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=footer). Last update [3814e16...473dab8](https://codecov.io/gh/huggingface/transformers/pull/3278?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>### Summary: `generation_mode` is a flag that - tells the decoder NOT to make decoder_attn_mask (ignored pad tokens and causal tokens) - keeps position_embeds correct even though we only decode one new token at a time. - tells the decoder The easiest way to get rid of it in `modeling_utils.py`: pass a kwarg from `BartModel.prepare_inputs_from_generation` (then it never needs to be unset, and modeling_utils.py doesn't need to know about it) I don't know how to get rid of the logic entirely. It's tough to know whether you're in generation mode at step 0 because the cache is empty.<|||||>I updated this PR to implement the solution I proposed.<|||||>Merging, but feel free to ask further questions!
transformers
3,277
closed
Add missing token classification for XLM
The current [modeling_xlm.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlm.py) did not have a `ForTokenClassification` class like the other models, which helps with NER task comparison across all existing models. Now `XLMForTokenClassification` can be called via: ```python from transformers import XLMForTokenClassification model = XLMForTokenClassification.from_pretrained('xlm-mlm-100-1280') ```
03-14-2020 17:31:27
03-14-2020 17:31:27
transformers
3,276
closed
Model fails to revert to generation_mode=False after generation
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BART Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name): CNN/DM * [ ] my own task or dataset: (give details below) ## To reproduce Hi @sshleifer, Thanks for the amazing model! I found a bug when alternatively trying to use **forward** to train the BartForConditionalGeneration and use **generate** to inference and evaluate the trained model. As shown in the code below, if I first use generate function then call forward function, the generation_mode attribute of decoder is set to True, and the shape of the decoder_output seems incorrect. ``` model = BartForConditionalGeneration.from_pretrained('bart-large-cnn') tokenizer = BartTokenizer.from_pretrained('bart-large-cnn') # input sequence input_seq = "Bart is a deep-learning pretrained model implemented in pytorch. It can smoothly handle summarization. It is a big model pretrained for generation tasks, especially summarizaition." input_ids = torch.LongTensor([tokenizer.encode(input_seq)]) # expected output sequence decoder_input_seq = "Bart is a big pretrained deep model in pytorch for summarization." decoder_input_ids = torch.LongTensor([tokenizer.encode(decoder_input_seq)]) # using generate method to inference with torch.no_grad(): result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20) print(tokenizer.decode(result[0])) # 'B. It is a big model pretrained for generation tasks, especially summarizaition. It' # NOW use forward to train result = model(input_ids, decoder_input_ids=decoder_input_ids) # the shape of decoder_output and encoder_output # what expected is: <1, 18, 50264> and <40, 1, 1024> # but actual output is: torch.Size([1, 1, 50264]) torch.Size([40, 1, 1024]) print(result[0].shape, result[2].shape) ``` Such issue can **seemingly** be addressed by mannually setting the generation_mode. ``` # mannually set the generation mode to False **seemingly** fix the issue model.model.decoder.generation_mode=False result = model(input_ids, decoder_input_ids=decoder_input_ids) print(result[0].shape, result[2].shape) # output is: torch.Size([1, 18, 50264]) torch.Size([40, 1, 1024]) and make sense ``` However, now the output of the forward function doesn't make sense, as shown below. ``` with torch.no_grad(): result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20) model.model.decoder.generation_mode=False new_result = model(input_ids, decoder_input_ids=result) print(tokenizer.decode(torch.argmax(new_result[0][0], dim=1))) # ' and and and and and and and and and and and and and and and and and and and' ``` In expectation, when we feed the generated decoder sequence to the decoder and remain the input of the encoder unchanged, the output of the decoder would at least resemble the decoder_input. However, the actual output of the model is `' and and and and and and and and and and and and and and and and and and and'`, which is definately not a reasonable output of a trained decoder. To go deeper, I tested different samples, all feeding the generated output of a encoder_input to the decoder and inspect the decoder_output (which "should" resemble decoder_input, i.e. 
generated sequence), the same happens everytime for me (most output is the duplication of a single token for many times, like "the the the ...", "and and and...", ". . . . "). Given all these results, I think there is a bug in either the implementation of **generate** or that of **forward**, the revert of generation_mode is simple, but I don't think the output of forward is a reasonable result currently. Could you please look into the issue? @sshleifer If I'm not using those functions in a correct manner, any advise or instruction is welcomed! Many thanks for the help! <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master-branch - Platform: windows 10 - Python version: 3.7.0 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): / - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
03-14-2020 15:35:35
03-14-2020 15:35:35
I don't follow your workflow. Why do you want to run forward after you generate using the generated_ids as decoder_input_ids? It would be easier to follow if you: - checked whether the behavior you expect is achieved by the authors' implementation in fairseq - made the example smaller Thanks for contributing! <|||||>@sshleifer Thanks for the reply! Your latest PR already fixed the generation_mode, thanks! The rest of the problem (that you probably failed to follow) is like a sanity check, but the result failed to meet the expectation for me. Basically, if I feed sequence s1 to the encoder to let the model to generate, suppose sequence s2 is generated, then if I directly feed <encoder_input_ids=s1, decoder_input_ids=s2> to the model, the output of the decoder would be something that resembles s2. However, as shown in the code below, the actual output of the decoder given input <s1,s2> is a sequence "and and ... and", which is largly different from s2, and that's where I'm confused. ``` with torch.no_grad(): result = model.generate(input_ids=input_ids, eos_token_ids=tokenizer.eos_token_id, num_beams=4, max_length=20) model.model.decoder.generation_mode=False new_result = model(input_ids, decoder_input_ids=result) print(tokenizer.decode(torch.argmax(new_result[0][0], dim=1))) # ' and and and and and and and and and and and and and and and and and and and' ``` Of course we won't use this code in practice (since it's just a sanity check), but I post this because I'm wondering if it's my incorrect way of using `forward` function when training that caused this confusion. As for me, if I want to train the summarization model using <paragraph=s1, summary=s2> pairs, I formerly feed <encoder_input=s1, decoder_input=< bos >+s2> and train the model with decoder_output=s2+< eos >. May I ask if it is the correct way? Thanks again for the kind help!<|||||>For summarization finetuning, I'd recommend: - prepend a space to s1 and s2, - then use `tokenizer.batch_encode_plus(s1, max_length=1024)` for `input_ids` and `attention_mask` - then `tokenizer.batch_encode_plus(s2, max_length=1024)['input_ids']` to get `decoder_input_ids` <|||||>@sshleifer Thanks for the update! I'll try out you method and report the result I get for the problem above tomorrow.<|||||>The underlying issue here is fixed, closing!
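A minimal sketch of the batch preparation suggested above. The dataset strings are placeholders, and the argument names follow `batch_encode_plus` as it existed at the time, so check them against your installed version:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('bart-large-cnn')

articles = [" Bart is a deep-learning pretrained model implemented in pytorch."]  # note the leading space
summaries = [" Bart is a pretrained pytorch model."]                              # note the leading space

# with real batches you would also pad the examples to a common length
enc = tokenizer.batch_encode_plus(articles, max_length=1024, return_tensors='pt')
dec = tokenizer.batch_encode_plus(summaries, max_length=1024, return_tensors='pt')

outputs = model(
    input_ids=enc['input_ids'],
    attention_mask=enc['attention_mask'],
    decoder_input_ids=dec['input_ids'],
)
lm_logits = outputs[0]  # use these against the shifted summary tokens to compute the training loss
```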
transformers
3,275
closed
Cannot Achieve Reproducibility with Tensorflow Transformer Models
I've been experimenting with the roberta-large model with both PyTorch and TensorFlow for a sentiment analysis task. With the PyTorch model, I am able to achieve 100% reproducibility; however, this is not the case with the TensorFlow model. I have set all the necessary seeds as follows: ``` seed_val = 3 os.environ['PYTHONHASHSEED'] = str(seed_val) np.random.seed(seed_val) random.seed(seed_val) tf.random.set_seed(seed_val) ``` and I am even using the fix described at https://github.com/NVIDIA/tensorflow-determinism: ```os.environ['TF_DETERMINISTIC_OPS'] = '1'``` I am not sure if reproducibility is fully supported with the TensorFlow transformer models yet, or if I am doing something wrong. Here is the link to the **Colab notebook** containing my code: https://drive.google.com/open?id=1xPTYPl8LyRrMgkiXUNtSbxxJ7pAslD2x Thanks in advance
03-14-2020 15:23:34
03-14-2020 15:23:34
I have the same issue and I get (sometimes wildly) different results from run to run. Here's my code: https://github.com/dmitriydligach/Thyme/blob/master/RelKeras/et.py Does anybody have a solution yet?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,274
closed
Tremendous slowdown in multi-node distributed training
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Finetuning a bert-base model on language modeling for a particular domain Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Training is happening on Azure NC24s_v3 nodes (4 V100s each) with NCCL as the backend. I'm comparing the performance on a single node scenario vs a 2 node scenario. Note that there is no infiniband networking between the nodes, only 40Gbps ethernet. 2. Use torch.distributed.launch to launch `run_language_modeling.py` in single node (mult-gpu) and multi node (mult-gpu) scenarios <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> In the single node scenario, I'm getting about 2 iteration/sec during training. In the multi-node scenario, it drops to 4 sec/iteration. Theorizing that the network was the issue, I reduced the model size significantly (down to 1 layer from 12 and the other hyperparameters also scaled down appropriately) and ran the test again. The same slowdown in performance was observed even then. Am I missing something here? Is it possible to perform multi-node training of bert models without infiniband? ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Ubuntu 18.04 - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes
03-14-2020 15:10:06
03-14-2020 15:10:06
Did you find an answer to what the cause is?<|||||>Well, I guess that you need infiniband between the nodes to train a model as large as bert. The ethernet interface seems to be a bottleneck during the gradient synchronization process. To test this out, I tried out a much smaller bert model on the existing setup without infiniband. Less gradient information to exchange, so I was able to observe a speedup. I tried out a normal sized model on a setup with infiniband and I was observing some speedup (albeit not perfect scaling, only like a 50-60% improvement). I concluded that not having infiniband was the issue. Maybe this could be updated in the readme where the instructions for `run_language_modeling.py` are given.<|||||>I also encounter this. And I found the `fairseq` code base scales linearly on the same ethernet hardware I have than `run_language_modeling.py`<|||||>Could you share some details on what model you trained on `fairseq` and that model's size?<|||||>Since `run_language_modeling.py` uses only 1 GPU per node in the code, could you share what changes need to be made to the file in order to work with Multi-GPU, Multi-Node settings like `Azure NC24s_v3 nodes`? In the code in `examples/run_language_modeling.py`, `1` GPU per node is hard-coded (in the last line) if args.local_rank == -1 or args.no_cuda: device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count() else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs torch.cuda.set_device(args.local_rank) device = torch.device("cuda", args.local_rank) torch.distributed.init_process_group(backend="nccl") args.n_gpu = 1 <|||||>You do not need any modifications. You will have to launch the script via `torch.distributed.launch`. It'll be something like this ``` python -m torch.distributed.launch --nproc_per_node 4 --nnodes $NODE_COUNT --node_rank $RANK --master_addr $MASTER_ADDR run_lm_finetuning.py .... ``` `$NODE_COUNT` will be your number of nodes. You'll have to find a way to obtain `$RANK` and `$MASTER_ADDR` depending on your cluster configuration. Since when you run via DistributedDataParallel, one process has only one gpu associated with it, that line does something to ensure that.<|||||>Is there any update on this? I suppose that not having infiniband interconnect was the only limiting factor? <|||||>Yes. Ran faster on a system with infiniband. On Thu, Nov 19, 2020 at 6:08 PM gvijqb <[email protected]> wrote: > Is there any update on this? I suppose that not having infiniband > interconnect was the only limiting factor? > > — > You are receiving this because you modified the open/close state. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/3274#issuecomment-730347349>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ADZB3Y3PKZBAOFOP5SSA3BLSQUGUXANCNFSM4LJHN2QA> > . > -- Regards Anirudh Srinivasan Research Fellow Microsoft Research, India
transformers
3,273
closed
add XLMForTokenClassification
Firstly, I want to experiment with the NER task across all available architectures with a Thai-language pretrained model. It turned out there was no TokenClassification class for XLM yet.
03-14-2020 14:58:12
03-14-2020 14:58:12
transformers
3,272
closed
How can I distill the xlm-roberta model, just like the distilled roberta model? Any suggestions? Thanks a lot
# ❓ Questions & Help how can i distill xlm-roberta model , maybe i can distill it ,just like distill roberta model , any suggestion ? thanks a lot
03-14-2020 14:36:00
03-14-2020 14:36:00
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any answer on this?
transformers
3,271
closed
Finetuning before feature extraction
Hi, Currently I am using the pipeline feature extraction to extract features with my own dataset as input. I was wondering if it is possible to finetune different models using my own dataset before using the pipeline for feature extraction and if so what will be the easiest way to do so? In the past there used to be a lm_finetuning script example but this one is no longer available. I cannot find any examples or guides how to finetune different models on a personal dataset. tl;dr is it possible to finetune different models on a personal dataset in just a few lines of code and if so how? Thanks in advance, A clueless person trying to learn more about the world of NLP.
03-14-2020 11:46:24
03-14-2020 11:46:24
The script has been renamed [`run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) to better reflect the fact that it can also be used to train a new model from scratch. Let us know if it works.<|||||>Hi @julien-c thanks for the quick reply! That makes sense, I must have missed the fact it got renamed. I have one more question about the "new" script: How difficult is it make to script compatible for finetuning a new model like BART? Is it as simple as adding the model to the list inside the script or will it need a lot of workarounds to get it working?
transformers
3,270
closed
train model from scratch with big data
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation I would like to train a new model from scratch with some big data sets, and the tutorial suggests loading and tokenizing examples on the fly. It would be great if that feature were readily integrated into the example scripts (e.g. `run_language_modeling.py`). It is not entirely clear to me how to implement this in the most efficient way. My data set does not fit into memory and I cannot train out of the box with the existing scripts. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
03-14-2020 09:50:27
03-14-2020 09:50:27
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
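A minimal sketch of the on-the-fly approach the request describes: a dataset that reads and tokenizes one line of the corpus at a time instead of loading everything into memory. The class name and corpus file name are illustrative, not part of the example scripts:

```python
import linecache
import torch
from torch.utils.data import Dataset
from transformers import AutoTokenizer

class LazyLineByLineDataset(Dataset):
    """Tokenizes one line of a large text file at a time."""

    def __init__(self, file_path, tokenizer, block_size=128):
        self.file_path = file_path
        self.tokenizer = tokenizer
        self.block_size = block_size
        # count lines once; the text itself stays on disk
        with open(file_path, encoding="utf-8") as f:
            self.num_lines = sum(1 for _ in f)

    def __len__(self):
        return self.num_lines

    def __getitem__(self, idx):
        # linecache reads only the requested line, so memory usage stays flat
        line = linecache.getline(self.file_path, idx + 1).strip()
        ids = self.tokenizer.encode(line, max_length=self.block_size)
        return torch.tensor(ids, dtype=torch.long)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = LazyLineByLineDataset("my_big_corpus.txt", tokenizer)  # hypothetical corpus file
```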
transformers
3,269
closed
When I install transformers, this error appears
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
03-14-2020 04:08:15
03-14-2020 04:08:15
You should add more information about your environment, like OS, Python version, etc. I just installed it on Windows 10 (WSL), Python 3.6.9 :: Anaconda, Inc.; it worked.<|||||>I installed it on Mac, Python 3.6.10, Anaconda
transformers
3,268
closed
add gpt2-xl for tf
TF GPT2-XL is now added to AWS and can be loaded via: ``` from transformers import TFGPT2LMHeadModel model = TFGPT2LMHeadModel.from_pretrained('gpt2-xl') ``` Thanks @bkkaggle for pointing this out!
03-13-2020 20:09:44
03-13-2020 20:09:44
Good to merge for me. Tested whether model can generate text and everything seems fine. @julien-c <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=h1) Report > Merging [#3268](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc4c37952a961f2d13e83f3d5ba6dab811d0bbfd&el=desc) will **increase** coverage by `0.19%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3268/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3268 +/- ## ========================================== + Coverage 77.82% 78.01% +0.19% ========================================== Files 98 98 Lines 16666 16666 ========================================== + Hits 12970 13002 +32 + Misses 3696 3664 -32 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `96.16% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3268/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.19% <0.00%> (+5.72%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=footer). Last update [cc4c379...501291a](https://codecov.io/gh/huggingface/transformers/pull/3268?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,267
closed
removing torch.cuda.empty_cache() from TF function
torch.cuda.empty_cache() was being called from a TF function (even when torch is unavailable). Not sure any replacement is needed if TF OOMs. Simply running the benchmarks on a GPU with less HBM will reproduce this error.
03-13-2020 18:22:04
03-13-2020 18:22:04
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=h1) Report > Merging [#3267](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc4c37952a961f2d13e83f3d5ba6dab811d0bbfd&el=desc) will **not change** coverage by `%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3267/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3267 +/- ## ======================================= Coverage 77.82% 77.82% ======================================= Files 98 98 Lines 16666 16666 ======================================= Hits 12970 12970 Misses 3696 3696 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=footer). Last update [cc4c379...4bb2bc3](https://codecov.io/gh/huggingface/transformers/pull/3267?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Indeed, thanks!
transformers
3,266
closed
[BART] FP16 testing fixes
closes #3249: fp16 forward pass failing when no `decoder_attention_mask` provided. Adds test coverage. closes #3265: test_generate_fp16 was failing since #3140 (by sending proper kwargs to `BartForConditionalGenerate.generate`)
03-13-2020 16:44:08
03-13-2020 16:44:08
I have a suspicion that this will also fix the GPU test runner.
transformers
3,265
closed
[BART] test_generate_fp16 fails after PR#3140
passes after `git checkout d6de6423` (commit preceding #3140) Traceback: ``` unfinished_sents.mul_((~eos_in_sents).long()) # stop when there is a </s> in each sentence, or if we exceed the maximul length > if unfinished_sents.max() == 0: E RuntimeError: cuda runtime error (716) : misaligned address at /pytorch/aten/src/THC/THCReduceAll.cuh:327 src/transformers/modeling_utils.py:992: RuntimeError ``` @thomwolf @patrickvonplaten
03-13-2020 15:43:57
03-13-2020 15:43:57
Interesting - will investigate. Probably something with the `device` of `unfinished_sents` then! @julien-c <|||||>Caused by previously unexposed kwargs changing behavior: passing ``` do_sample=False, early_stopping=True ``` in the unit test fixes them.<|||||>I'm getting this error @sshleifer <|||||>https://github.com/huggingface/transformers/issues/5221<|||||>I have early stopping = False, but do_sample = True
transformers
3,264
closed
Clean special token init in modeling_....py
#### INTRO: This PR is a follow-up from PR #3011. After discussion with @thomwolf today, we decided that the variable `eos_token_ids` in all models causes more confusion and ugly code than it helps. #### BACKGROUND: All models now have `pad_token_id`, `bos_token_id` and `eos_token_id` as default values. The reasons are discussed and explained in #3011. Originally, we had the `list` variable `eos_token_ids`. The idea behind was that a model could have multiple `eos_token_ids` if the user wants to finish at certain tokens besides the standard EOS token. But this caused a lot of unclean code AND is not consistent with `tokenizers` which all has a `tokenizer.eos_token_id` int variable. So, we return to `eos_token_id` for moders as well and might in the future have a variable `forbidden_tokens` or `special_stop_tokens`. #### THIS PR DOES: - Replace all list `eos_token_ids` with `eos_token_id` - Add default `eos_token_id, pad_token_id, bos_token_id` to all models #### TESTS: I tested that the `pretrained Config` has now the same special tokens as the `pretrained Tokenizer` for all model identifier names (e.g. `gpt2-large`) with the following code: ``` for model_id_name in ALL_PRETRAINED_MODEL_ARCHIVE_MAP.keys(): tok = AutoTokenizer.from_pretrained(model_id_name) conf = AutoConfig.from_pretrained(model_id_name) pad_equal = tok.pad_token_id == conf.pad_token_id eos_equal = tok.eos_token_id == conf.eos_token_id bos_equal = tok.bos_token_id == conf.bos_token_id if not pad_equal: print("PAD not equal for {}!".format(model_id_name)) print("TOK: {} | CONF: {}".format(tok.pad_token_id, conf.pad_token_id)) if not eos_equal: print("EOS not equal for {}!".format(model_id_name)) print("TOK: {} | CONF: {}".format(tok.eos_token_id, conf.eos_token_id)) if not bos_equal: print("BOS not equal for {}!".format(model_id_name)) print("TOK: {} | CONF: {}".format(tok.bos_token_id, conf.bos_token_id)) ``` which gives the following result: ``` PAD not equal for bert-base-dutch-cased! TOK: 3 | CONF: 0 BOS not equal for distilbert-base-cased! TOK: None | CONF: 0 BOS not equal for distilbert-base-cased-distilled-squad! TOK: None | CONF: 0 ``` This means that: - `bert-base-dutch-cased` has a different `pad_token_id` in its tokenizer config than the `pad_token_id` in default Bert tokenizer, so that we will have to update the `bert-base-dutch-cased-config.json` file on AWS (Best option in my opinion). - `distilbert-base-cased` and `distilbert-base-cased-distilled-squad` have hard coded `bos_token_id` in their config.json file on AWS (I checked), but the distilbert tokenizer doesn`t even have it -> is that correct? @VictorSanh #### TODO: - [x] Is the approach good for you? @thomwolf @julien-c @LysandreJik @mfuntowicz @sshleifer - [ ] Should we also check all community models whether their tokenizer differs from the default one? - [ ] I think the test I wrote is quite useful, but it uses Config and Tokenizer Classes in the same file, which is not in line with the current test files, which is why I didn't add it. Should we add a test like this? If yes, how?
03-13-2020 14:28:24
03-13-2020 14:28:24
Is good to merge for me if you guys are fine with it. Can open a new PR for a test proposal for this one and then also adapt the dutch bert model config on AWS. <|||||>Isn't this breaking other hosted configs than the ones you're listing? Like https://huggingface.co/microsoft/DialoGPT-large for instance? (more generally, we don't control which configs users use the lib with, so adding keys is fine, but renaming keys – or worse, changing their types – is costly)<|||||>PS: I do agree that having a `eos_token_id` is cleaner than having `eos_token_ids`<|||||>> Isn't this breaking other hosted configs than the ones you're listing? > > Like https://huggingface.co/microsoft/DialoGPT-large for instance? (more generally, we don't control which configs users use the lib with, so adding keys is fine, but renaming keys – or worse, changing their types – is costly) That's a very good point! There is one scenario, where it could break other hosted configs: - The user defined an `eos_token_ids` that is different from the default `eos_token_id` the model has. But most of the time (just from browsing through some `config.json` files) the model was trained with HF and then saved and uploaded which means that the `eos_token_ids` was saved and is included in the `config.json`. In this case the values are the same as is the case for `https://huggingface.co/microsoft/DialoGPT-large`. In this case we have still have a dead parameter in the config which should be removed. I propose the following: I can write a script that checks the following for each configs: 1) does eos_token_ids exist ? 2) is eos_token_ids == default config.eos_token_id ? If there are a lot of 1) and 2) then would write a bash script that simply replaces the line "`eos_token_ids` = [ ... ]" with `eos_token_ids`=... Will report the results here<|||||>If you need an exhaustive list of all hosted models (with their config + files), you can do ```python api = HfApi() models = api.model_list() ``` <|||||>Okey here is my analysis of the 308 (not bad actually! ) added community models: 1. **66** can't load either their config (n)or their tokenizer (including 3 facebook bart models because we call them directly by `bart-large-cnn` and not by `facebook/bart-large-cnn` -> should maybe add a new link or change model name online) 2. **79** currently have wrong `pad_token_id`, `eos_token_id`, `bos_token_id` in their configs. IMPORTANT: The reason for this is that we used to have the wrong defaults saved in `PretrainedConfig()` - see e.g. [here](https://github.com/huggingface/transformers/pull/2885/commits/77d958ac7f0b008df17656e3652246f602aef095) the default value for **any** model for `pad_token_id` was 0. People trained a model with the lib, saved it and the resulting config.json now had a `pad_token_id = 0` saved. This was then uploaded. But it's wrong and should be corrected regardless of this PR. 3. For **68** after changing `eos_token_ids` to `eos_token_id` we will have to remove the `eos_token_ids` parameter and possibly adapt the `eos_token_id` parameter - almost all of which we have to change anyway (1 exception) 4. For **162** models everything is fine! 
Here the full analysis log [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/results.txt) Here the code that created this log (simple comparison of loaded tokenizer and config with default config): [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/test_all_community_models.py) Here the 308 models I checked: [here](https://github.com/patrickvonplaten/files_to_link_to/blob/master/all_community_models.txt) **First conclusion:** - I think we can merge this PR as all models for which this PR would change to a "wrong" behavior already have a "wrong" behavior that should be fixed. The sooner we merge the sooner we have the correct API. **Second conclusion:** I think besides the FB models 1) is not really our job to fix. But 2) and 3) I think should be fixed on AWS. I'm happy to do this using some automated bash/python scripting. I would try out that it work on 1,2 community models and then apply it to all other cases (to not screw something up on AWS). Would that be good for you @julien-c @thomwolf @LysandreJik ? In a future PR we could think about some automated testing that tokenizer configs are equal to model configs. <|||||>> If you need an exhaustive list of all hosted models (with their config + files), you can do > > ```python > api = HfApi() > models = api.model_list() > ``` Great, this will be very helpful when writing an automated script to corret the config.json of the community models! <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=h1) Report > Merging [#3264](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8becb732931bbab5dd75cca5f5e7c75b2516d10b&el=desc) will **decrease** coverage by `0.09%`. > The diff coverage is `97.91%`. 
[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3264/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3264 +/- ## ========================================== - Coverage 77.64% 77.55% -0.10% ========================================== Files 100 100 Lines 16979 16970 -9 ========================================== - Hits 13184 13161 -23 - Misses 3795 3809 +14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <ø> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.58% <94.11%> (+0.21%)` | :arrow_up: | | [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <100.00%> (ø)` | | | [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <100.00%> (ø)` | | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/3264/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=footer). Last update [8becb73...8296647](https://codecov.io/gh/huggingface/transformers/pull/3264?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). 
<|||||>More in-detail analysis why **67** (actually one more now) models can't be loaded - log files are updated and use `api = HfApi(); models = api.model_list()` now. A) **34** models can't even load their config file. The reasons for this are either: 1. **11/34**: Model identifier is wrong, e.g. `albert-large` does not exist anymore, it seems like it was renamed to `albert-large-v1`. These models have saved the wrong name online that how it is saved on AWS. Can easily be corrected. *e.g.* The model_identifier: b`ertabs-finetuned-xsum-extractive-abstractive-summarization` does not exist, but `remi/bertabs-finetuned-xsum-extractive-abstractive-summarization` does exist -> wrong model identifier. Just 11 cases, so easy to correct. 2. **23/34**: There is an unrecognized `model_type` in the config.json, `e.g.` > "Error: Message: Unrecognized model in hfl/rbtl3. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl > " Here I think we should add a `model_type` to the config (probably `bert` most of the time) B) **33** models can load their config, but cannot load their tokenizers. The error message is almost always the same **32/33**: > TOK ERROR: clue/roberta_chinese_base tokenizer can not be loaded > Message: Model name 'clue/roberta_chinese_base' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-larg > e-openai-detector). We assumed 'clue/roberta_chinese_base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocab > ulary files at this path or url. Here: the model has neither of: - `vocab_file` - `added_tokens_file` - `special_tokens_map_file` - `tokenizer_config_file` -> So we would have to upload one of those files to make it work. Not sure how time-consuming this is! and we got one tokenizer which does not even have a path: `Message: stat: path should be string, bytes, os.PathLike or integer, not NoneType` So I think it's mostly just renaming the model identifiers to their correct names and adding some tokenizer names. @julien-c
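As a follow-up sketch, the automated check over community models could be built on the `HfApi().model_list()` call mentioned above; the `modelId` attribute and the broad exception handling here are assumptions, not something confirmed in this thread:

```python
from transformers import AutoConfig, AutoTokenizer
from transformers.hf_api import HfApi

# Every model hosted on the hub (assumes each entry exposes a `modelId` attribute).
model_ids = [m.modelId for m in HfApi().model_list()]

for model_id in model_ids:
    try:
        conf = AutoConfig.from_pretrained(model_id)
        tok = AutoTokenizer.from_pretrained(model_id)
    except Exception as exc:  # missing files, unrecognized model_type, ...
        print("LOAD ERROR for {}: {}".format(model_id, exc))
        continue
    for attr in ("pad_token_id", "eos_token_id", "bos_token_id"):
        if getattr(tok, attr, None) != getattr(conf, attr, None):
            print("{} not equal for {}! TOK: {} | CONF: {}".format(
                attr, model_id, getattr(tok, attr, None), getattr(conf, attr, None)))
```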
transformers
3,263
closed
Create camembert-base-README.md
First version of our model_card for the original uploaded CamemBERT. @louismartin @pjox
03-13-2020 12:11:16
03-13-2020 12:11:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=h1) Report > Merging [#3263](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/afea70c01c7d2a844662a4d66b9f9d933cc6449c?src=pr&el=desc) will **increase** coverage by `0.11%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3263/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3263 +/- ## ========================================== + Coverage 77.82% 77.93% +0.11% ========================================== Files 98 98 Lines 16666 16666 ========================================== + Hits 12970 12989 +19 + Misses 3696 3677 -19 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3263/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.68% <0%> (+3.22%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=footer). Last update [afea70c...14537eb](https://codecov.io/gh/huggingface/transformers/pull/3263?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Merged it (not sure it was ready yet, but feel free to update in another PR). **[Model page](https://huggingface.co/camembert-base)** Let us know if we can help in any way @benjamin-mlr @louismartin @pjox You can also add ``` --- language: french --- ``` on top of the README for the model to pop up when looking for FR models
transformers
3,262
closed
TFAlbertMainLayer cannot be imported from the transformers library.
Unlike the TFBertMainLayer class, which can be imported from transformers, TFAlbertMainLayer cannot be imported. Locally I made the change to `__init__.py`: import TFAlbertMainLayer from modeling_tf_albert. Similar to the TensorFlow version, the PyTorch version should also be implemented.
03-13-2020 10:19:35
03-13-2020 10:19:35
This seems like a valid point, what do you think @LysandreJik?
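In the meantime, a possible workaround is to import the layer straight from its module instead of the package root. A small sketch, assuming a version where `modeling_tf_albert` already ships the class (note the weights stay randomly initialized here because only the config is loaded):

```python
import tensorflow as tf
from transformers import AlbertConfig, AlbertTokenizer
from transformers.modeling_tf_albert import TFAlbertMainLayer  # not exported at the package root yet

config = AlbertConfig.from_pretrained("albert-base-v2")
albert_layer = TFAlbertMainLayer(config, name="albert")

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
inputs = tokenizer.encode("A short test sentence", add_special_tokens=True, return_tensors="tf")

# The main layer returns (sequence_output, pooled_output) by default.
sequence_output, pooled_output = albert_layer(inputs)
print(sequence_output.shape, pooled_output.shape)
```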
transformers
3,261
closed
Update examples/ner/run_ner.py
Update the example file by changing the name of AlbertForTokenClassification to AlbertForSequenceClassification.
03-13-2020 09:35:07
03-13-2020 09:35:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=h1) Report > Merging [#3261](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/afea70c01c7d2a844662a4d66b9f9d933cc6449c&el=desc) will **increase** coverage by `0.12%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3261/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3261 +/- ## ========================================== + Coverage 77.82% 77.94% +0.12% ========================================== Files 98 98 Lines 16666 16666 ========================================== + Hits 12970 12990 +20 + Misses 3696 3676 -20 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3261/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0.00%> (-0.14%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3261/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.22% <0.00%> (+3.75%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=footer). Last update [afea70c...547efb9](https://codecov.io/gh/huggingface/transformers/pull/3261?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Well, I don't think the `SequenceClassification" head is not the right one, as it is supposed for sequence classification/regression tasks (see https://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertForSequenceClassification). NER requires a per token classification implementation: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L791 <|||||>Will this be fixed by switching to AutoModel? I think we are doing that here for `run_pl_ner.py` https://github.com/huggingface/transformers/pull/3290<|||||> I knew what the problem was. In version 2.5.1, there is no definition for `AlbertForTokenClassification` in `run_ner.py` However, it is included in the master branch. I'll close this request.<|||||>@srush I think using AutoModelForTokenClassification is better than calling each model class. How about making a change to `run_ner.py`?<|||||>Yes, if you can send that PR and add me as a reviewers
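For reference, a minimal sketch of the architecture-agnostic setup the Auto classes would give `run_ner.py` (the checkpoint name and label count below are placeholders, not taken from this thread):

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForTokenClassification

model_name = "bert-base-cased"  # placeholder: any checkpoint with a token classification head available
num_labels = 9                  # placeholder: size of the NER label set, e.g. CoNLL-2003

config = AutoConfig.from_pretrained(model_name, num_labels=num_labels)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, config=config)
```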
transformers
3,260
closed
Model name 'distilbert-base-german-cased' was not found in model name list.
This code is ok:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
model = AutoModel.from_pretrained("distilbert-base-german-cased")
```

But the following fails:

```python
from transformers import BertForSequenceClassification, AdamW, BertConfig

model = BertForSequenceClassification.from_pretrained(
    "distilbert-base-german-cased",
    num_labels = 2,
    output_attentions = False,
    output_hidden_states = False,
)
```

with:

`OSError: Model name 'distilbert-base-german-cased' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-german-cased/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.`
03-13-2020 08:45:46
03-13-2020 08:45:46
Hi @woiza, try to use: `DistilBertForSequenceClassification` instead of `BertForSequenceClassification` :) <|||||>@stefan-it that worked, thank you!
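For anyone landing here later: the Auto classes sidestep picking the architecture-specific class by hand. A small sketch, assuming the goal is a fresh binary classification head on top of the pretrained German DistilBERT:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-german-cased",
    num_labels=2,  # classification head is newly initialized
)
```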
transformers
3,259
closed
Great job advice
First of all, this is excellent work. I shared it with multiple colleagues. Also, please provide an implementation of the ELMo model and more TensorFlow examples. Thank you very much~
03-13-2020 02:25:17
03-13-2020 02:25:17
@DenceChen thanks for sharing :-) Feel free trying to add a model if you need it!
transformers
3,258
closed
very slow performance on transformer 2.5.0 versus 2.3.0
Hi, I am running run_glue.py with the latest version of transformers (2.5.0, Python 3.5), and it is at least 10 times slower than running the same code with transformers 2.3.0 on Python 3.6.9. I use the BERT model. The speed difference between these versions is extremely large; could you have a look and check why the performance of the latest version of transformers is so low? Thank you.
03-13-2020 01:39:01
03-13-2020 01:39:01
I met the same issue when running the run_squad.py using the 2.5.0 version. <|||||>Which PyTorch version are you using for your benchmarks? Is it the same for both, or is it different?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
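It would also help to know whether the slowdown is in the forward pass itself or in the data processing around it. A rough timing sketch that can be run under both installations (model and sentence are placeholders):

```python
import time
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

# A single fixed input so only the model forward pass is measured.
input_ids = torch.tensor([tokenizer.encode("a short example sentence", add_special_tokens=True)])

with torch.no_grad():
    start = time.perf_counter()
    for _ in range(100):
        model(input_ids)
print("avg forward pass: {:.4f}s".format((time.perf_counter() - start) / 100))
```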
transformers
3,257
closed
ELECTRA
Adds ELECTRA to the library. The script I'm using to compare the different models is this [Github gist](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed), coupled to [a modified version of the ELECTRA repository](https://github.com/LysandreJik/electra). - [x] add model/configuration/tokenization classes - [x] add conversion scripts - [x] add tests - [x] finalize Let's detail what should be done at each step ## Adding model/configuration/tokenization classes Here is the workflow for adding model/configuration/tokenization classes: - [x] copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model name, - [x] edit the files to replace `XXX` (with various casing) with your model name - [x] copy-paste or create a simple configuration class for your model in the `configuration_...` file - [x] copy-paste or create the code for your model in the `modeling_...` files (PyTorch) - [x] copy-paste or create the code for your model in the `modeling_...` files (TF 2.0) - [x] copy-paste or create a tokenizer class for your model in the `tokenization_...` file # Adding conversion scripts Here is the workflow for the conversion scripts: - [x] copy the conversion script (`convert_...`) from the present folder to the main folder. - [x] edit this script to convert your original checkpoint weights to the current pytorch ones. # Adding tests: Here is the workflow for the adding tests: - [x] copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main folder and rename them, replacing `xxx` with your model name, - [x] edit the tests files to replace `XXX` (with various casing) with your model name - [x] edit the tests code as needed # Final steps You can then finish the addition step by adding imports for your classes in the common files: - [x] add import for all the relevant classes in `__init__.py` - [x] add your configuration in `configuration_auto.py` - [x] add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py` - [x] add your tokenizer in `tokenization_auto.py` - [x] add your models and tokenizer to `pipeline.py` - [x] add a link to your conversion script in the main conversion utility (in `commands/convert.py`) - [x] edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py` file - [x] add a mention of your model in the doc: `README.md` and the documentation itself at `docs/source/pretrained_models.rst`. - [x] upload the pretrained weigths, configurations and vocabulary files.
03-12-2020 22:51:13
03-12-2020 22:51:13
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=h1) Report > Merging [#3257](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/012d775b14d1ab673aab7eae151823a74a8525a6&el=desc) will **not change** coverage by `%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3257/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3257 +/- ## ======================================= Coverage 77.54% 77.54% ======================================= Files 103 103 Lines 17268 17268 ======================================= Hits 13390 13390 Misses 3878 3878 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=footer). Last update [012d775...012d775](https://codecov.io/gh/huggingface/transformers/pull/3257?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Does this add the ability to train a language model using method in ELECTRA? (Thanks!)<|||||>It doesn't, we're currently working on a pre-training script incorporating the ELECTRA method but it's still a few weeks out.<|||||>> It doesn't, we're currently working on a pre-training script incorporating the ELECTRA method but it's still a few weeks out. Is this an active branch/issue? I'm interested in contributing if so, but I can't find it<|||||>Not public yet, will let you know when it is @shoarora!
transformers
3,256
closed
Implement Electra
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> https://github.com/google-research/electra ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Getting better results for example in QA tasks. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I have no idea how to implement, caused by the experience with this library
03-12-2020 21:27:03
03-12-2020 21:27:03
We are currently working on it :-) Check out PR: #3257
transformers
3,255
closed
add BART to README
03-12-2020 18:30:19
03-12-2020 18:30:19
transformers
3,254
closed
Bump psutil from 5.6.3 to 5.6.6 in /examples/distillation
Bumps [psutil](https://github.com/giampaolo/psutil) from 5.6.3 to 5.6.6. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/giampaolo/psutil/blob/master/HISTORY.rst">psutil's changelog</a>.</em></p> <blockquote> <h1>5.6.6</h1> <p>2019-11-25</p> <p><strong>Bug fixes</strong></p> <ul> <li>1179_: [Linux] Process cmdline() now takes into account misbehaving processes renaming the command line and using inappropriate chars to separate args.</li> <li>1616_: use of Py_DECREF instead of Py_CLEAR will result in double free and segfault (<code>CVE-2019-18874 &lt;https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18874&gt;</code>__). (patch by Riccardo Schirone)</li> <li>1619_: [OpenBSD] compilation fails due to C syntax error. (patch by Nathan Houghton)</li> </ul> <h1>5.6.5</h1> <p>2019-11-06</p> <p><strong>Bug fixes</strong></p> <ul> <li>1615_: remove pyproject.toml as it was causing installation issues.</li> </ul> <h1>5.6.4</h1> <p>2019-11-04</p> <p><strong>Enhancements</strong></p> <ul> <li>1527_: [Linux] added Process.cpu_times().iowait counter, which is the time spent waiting for blocking I/O to complete.</li> <li>1565_: add PEP 517/8 build backend and requirements specification for better pip integration. (patch by Bernát Gábor)</li> </ul> <p><strong>Bug fixes</strong></p> <ul> <li>875_: [Windows] Process' cmdline(), environ() or cwd() may occasionally fail with ERROR_PARTIAL_COPY which now gets translated to AccessDenied.</li> <li>1126_: [Linux] cpu_affinity() segfaults on CentOS 5 / manylinux. cpu_affinity() support for CentOS 5 was removed.</li> <li>1528_: [AIX] compilation error on AIX 7.2 due to 32 vs 64 bit differences. (patch by Arnon Yaari)</li> <li>1535_: 'type' and 'family' fields returned by net_connections() are not always turned into enums.</li> <li>1536_: [NetBSD] process cmdline() erroneously raise ZombieProcess error if cmdline has non encodable chars.</li> <li>1546_: usage percent may be rounded to 0 on Python 2.</li> </ul> </tr></table> ... 
(truncated) </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/giampaolo/psutil/commit/c6cd256da95ffe9599792759b1c2586ba24fa047"><code>c6cd256</code></a> pre release</li> <li><a href="https://github.com/giampaolo/psutil/commit/b2414b83d3d728ec34ea0e35bfb21517ee231401"><code>b2414b8</code></a> revert <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a></li> <li><a href="https://github.com/giampaolo/psutil/commit/c63369e999b458ecbd559bdde895c344b4db2841"><code>c63369e</code></a> updat HISTORY</li> <li><a href="https://github.com/giampaolo/psutil/commit/edb20f664f28653dcdd24f0bf0191984738dca6e"><code>edb20f6</code></a> linux, cmdline(), fix for <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1179">#1179</a>, comment 552984549: sometimes string ends wit...</li> <li><a href="https://github.com/giampaolo/psutil/commit/d739cbb1a5b207212d467b219dfc25b017911530"><code>d739cbb</code></a> use PROCESS_QUERY_LIMITED_INFORMATION</li> <li><a href="https://github.com/giampaolo/psutil/commit/f7e898b0987f97352c7551bdd9b29b594e1236f6"><code>f7e898b</code></a> <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a>: use psutil_pid_is_running() instead of GetExitCodeProcess</li> <li><a href="https://github.com/giampaolo/psutil/commit/72c84cb4edb5c0968a83c1f45ad5cc51235e0af3"><code>72c84cb</code></a> #fix <a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1595">#1595</a> / windows: kill() may not raise AccessDenied</li> <li><a href="https://github.com/giampaolo/psutil/commit/1f8d432db12a907544ac533b66a5a61ba25321fb"><code>1f8d432</code></a> Merge branch 'master' of github.com:giampaolo/psutil</li> <li><a href="https://github.com/giampaolo/psutil/commit/e6faebcd7adaa327d1ce57385cbebe7724d02350"><code>e6faebc</code></a> release gil around users()/BSD (<a href="https://github-redirect.dependabot.com/giampaolo/psutil/issues/1425">#1425</a>)</li> <li><a href="https://github.com/giampaolo/psutil/commit/5cb1b0b526765720253fdb2e8eff0bf380bbe0a8"><code>5cb1b0b</code></a> Merge branch 'master' of github.com:giampaolo/psutil</li> <li>Additional commits viewable in <a href="https://github.com/giampaolo/psutil/compare/release-5.6.3...release-5.6.6">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=psutil&package-manager=pip&previous-version=5.6.3&new-version=5.6.6)](https://help.github.com/articles/configuring-automated-security-fixes) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
03-12-2020 18:16:07
03-12-2020 18:16:07
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=h1) Report > Merging [#3254](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab?src=pr&el=desc) will **decrease** coverage by `1.1%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3254/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3254 +/- ## ========================================== - Coverage 77.93% 76.82% -1.11% ========================================== Files 98 98 Lines 16666 16666 ========================================== - Hits 12988 12804 -184 - Misses 3678 3862 +184 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `82.46% <0%> (-3.23%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3254/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=footer). Last update [2e81b9d...28fce2c](https://codecov.io/gh/huggingface/transformers/pull/3254?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,253
closed
bug in run_glue
Hi, I got this error when running run_glue.py: the line `from transformers import glue_compute_metrics as compute_metrics` fails with `ImportError: cannot import name 'glue_compute_metrics'`.
03-12-2020 16:24:37
03-12-2020 16:24:37
If you could include your environment and steps to replicate the issue, that would help. Its working for me on version 2.5.1 (from master branch).<|||||>Hi. Sorry then I must have used the old version of transformer, I will close the issue and reopen if needed. thanks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
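In case someone else hits this: the GLUE metrics are only exported by sufficiently recent releases and only when scikit-learn is importable, so a quick sanity check of the environment is worth doing first. A small sketch:

```python
import transformers
print(transformers.__version__)  # should match the release the example script was written for

# The GLUE metrics are only defined when scikit-learn is importable, so this import
# can fail either on an old release or on a missing sklearn install.
import sklearn  # noqa: F401
from transformers import glue_compute_metrics
print(glue_compute_metrics)
```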
transformers
3,252
closed
batch_encode_plus cannot work properly
I have data like below: ``` >>> ds[0][0:3] ["John was writing lyrics for his new album.He started experiencing writer 's block.He tried to force himself to write but it would n't do anythingHe tried to force himself to write but it would n't do anything.He took a walk , hung out with some friends , and looked at natureHe took a walk , hung out with some friends , and looked at natureHe took a walk , hung out with some friends , and looked at nature", 'Franny did not particularly like all of the immigration happening.She thought immigrants were coming to cause social problemsShe thought immigrants were coming to cause social problems.Franny was upset when an immigrant moved in next door.The immigrant , Sal , was kind and became friends with FrannyThe immigrant , Sal , was kind and became friends with Franny', 'Ari spends $ 20 a day on pickles.He decides to make his own to save money.He puts the pickles in brine.Ari waits 2 weeks for his pickles to get sour'] >>> ds[1][0:3] ['He felt inspiration and then went back home to write', 'When he finished his paper he went to bedWhen he finished his paper he went to bed', 'Trudey hoped self-publishing would be more profitable'] ``` I was trying to do next sentence prediction using BERT model, it is necessary to use text pair to finish this problem.However,when I was tring to encode ds[0] and ds[1] as a batched text pair,I have met following problem. ``` >>> input_wsrq1ids=tokenizer.batch_encode_plus([(ds[0][0:3],ds[1][0:3])],add_special_tokens=True,return_tensors='pt') >>> input_wsrq1ids Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'inp_wsrq1ids' is not defined >>> input_wsrq1ids {'input_ids': tensor([[101, 100, 100, 100, 102, 100, 100, 100, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1]])} ``` The answer is definitely not the right encoding fot this data. I did not quite understand this documentation of batch_encod_plus method,it says for text_pair,look for encode_plus for details.However,the encode_plus only works for non-batched data, and there is no clue and example code to show us how to use batch_enocde_plus properly.
03-12-2020 16:16:47
03-12-2020 16:16:47
This is quite a weird way to encode your data with `ds[0][0:3]`. There are two cases:

1) You have a single string you want to encode:
```
input_str = 'hello, what time is it?'
input_ids_dict = tokenizer.encode_plus(input_str)
```

2) You have a batch of input string data that you want to encode:
```
input_str_batch = ['hello what time is it', 'hello, how are you?', "Hey, I'm Peter"]
input_ids_dict = tokenizer.batch_encode_plus(input_str_batch, pad_to_max_length=True)
```

Also see #3237
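Since the original question was about text pairs, a sketch of the batched pair case as well; the sentences are placeholders, and the key point is passing a list of `(text, text_pair)` tuples:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

contexts = ["John was writing lyrics for his new album.", "Ari spends $20 a day on pickles."]
endings = ["He felt inspiration and then went back home to write.", "He decides to make his own to save money."]

inputs = tokenizer.batch_encode_plus(
    list(zip(contexts, endings)),   # one (text, text_pair) tuple per example
    add_special_tokens=True,
    pad_to_max_length=True,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)    # (batch_size, max_seq_len)
print(inputs["token_type_ids"][0])  # 0s for the first segment, 1s for the second
```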
transformers
3,251
closed
Why is the seq_len dimension hard coded to be the first dimension of BERT's input?
According to your [code](https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_bert.py#L164), the `seq_len` dimension always corresponds to the first dimension of the input tensor. I found this problem when I tried to feed a tensor of shape `(batch_size, num_seq, seq_len)` and error occurred. Is there any specific reason for you to adopt this setting? Given that normally a PyTorch` Embedding` layer can take an input of any shape, it doesn't look so natural to me to have this setting. Any suggestions? Many thanks!
03-12-2020 16:11:46
03-12-2020 16:11:46
My wild guess is that maybe there is no specific reason for this; it's just an implementation choice. If I really want to feed a tensor with a different number of dimensions, I can just reshape twice (i.e., once before the forward pass and once after).
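A sketch of that reshape-twice workaround, assuming an input of shape `(batch_size, num_seq, seq_len)` (the dummy ids below are only there for a shape check):

```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

batch_size, num_seq, seq_len = 2, 3, 16
input_ids = torch.zeros(batch_size, num_seq, seq_len, dtype=torch.long)  # dummy token ids

# Flatten the extra dimension before the forward pass...
flat_ids = input_ids.view(batch_size * num_seq, seq_len)
sequence_output = model(flat_ids)[0]  # (batch_size * num_seq, seq_len, hidden_size)

# ...and restore it afterwards.
sequence_output = sequence_output.view(batch_size, num_seq, seq_len, -1)
print(sequence_output.shape)  # torch.Size([2, 3, 16, 768])
```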
transformers
3,250
closed
UnicodeDecodeError when loading BART from fairseq checkpoint
# 🐛 Bug
When trying to load a checkpoint from the fairseq library I'm getting a UnicodeDecodeError.

## Information
Model I am using (Bert, XLNet ...): BART
Language I am using the model on (English, Chinese ...): English

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. Train a checkpoint with the fairseq library
2. Load it using BartForMaskedLM.from_pretrained

<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

## Expected behavior
Get the checkpoint loaded into a BART model.

<!-- A clear and concise description of what you would expect to happen. -->

## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: master
- Platform: linux
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0 / Yes
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
- `fairseq` version: master
03-12-2020 15:41:33
03-12-2020 15:41:33
Unfortunately, I don't think the conversion you are describing is currently supported. If I were to attempt this on my own, I would try to modify https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py to take in a path to a checkpoint, rather than a `torch.hub` alias.<|||||>This function converts from saved fairseq checkpoints: https://github.com/huggingface/transformers/blob/7a7fdf71f80452fcae064bd016f06e9a0f0f19ed/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py#L81 Let me know if that helps!
transformers
3,249
closed
Using FP16 on BartModel
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BART Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: CNN/DM * [ ] my own task or dataset: (give details below) ## To reproduce I've installed the master branch of transformers but I still encountered the same issue as #3117 when using FP16 BartModel. I just initialized the model without loading the pretarined weights, but I guess the model should still be able to correctly forward the input LongTensor(batch, seq_length). The code is shown below, simply initialize a model and forward an input: ``` model = BartModel(BartConfig()) model = model.cuda().half() cur_inputs = torch.zeros(4,16,dtype=torch.long).cuda() cur_res = model(cur_inputs) ``` The error is: >~\Anaconda3\envs\pytorch\lib\site-packages\transformers\modeling_bart.py in forward(self, query, key, value, key_padding_mask, layer_state, need_weights, static_kv, attn_mask) assert v is not None --> attn_output = torch.bmm(attn_probs, v) assert attn_output.size() == (bsz * self.num_heads, tgt_len, self.head_dim) attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' in call to _th_bmm @sshleifer The model is quite novel to me, so am I using it incorrectly or there's still a bug in BertModel class? Thanks in advance for the help! <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master branch - Platform: Windows - Python version: 3.7.0 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): / - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
03-12-2020 15:10:15
03-12-2020 15:10:15
@sshleifer May I ask could you reproduce the error in your machine? I ran the same code on a Linux machine with master-branch of transformers, but still got the same error. I'm planning to use BartModel these days so please notify me at your earliest convenience if there're any updates. Many thanks!<|||||>Yes, will try to fix it today! Thanks for reporting!<|||||>> Yes, will try to fix it today! Thanks for reporting! Thanks Sam, The code works well this time! Thanks again for the contribution.
transformers
3,248
closed
[model_cards] polbert: simplify usage example with pipelines
Co-Authored-By: Darek Kłeczek
03-12-2020 14:04:49
03-12-2020 14:04:49
transformers
3,247
closed
Improved Error message when loading config/model with .from_pretrained()
Given the previous error message, it can be quite time-consuming to find out that the only problem was that the /path/to/model/dir was incorrect :D
03-12-2020 13:03:18
03-12-2020 13:03:18
Thanks to @mariamabarham for pointing this out :-) <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=h1) Report > Merging [#3247](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dc848c29944265e04f1473cd0312eeffc1842276&el=desc) will **decrease** coverage by `0.36%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3247/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3247 +/- ## ========================================== - Coverage 78.32% 77.95% -0.37% ========================================== Files 98 98 Lines 16665 16665 ========================================== - Hits 13053 12992 -61 - Misses 3612 3673 +61 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.82% <ø> (ø)` | | | [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0.00%> (+0.13%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3247/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.40% <0.00%> (+4.11%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=footer). Last update [dc848c2...cd5998e](https://codecov.io/gh/huggingface/transformers/pull/3247?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,246
closed
How do you do inference in production?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I was wondering how do you guys do inference in production? I tried to convert this model to tensorflow model but failed. <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> This is what I tried: ``` tf_model = TFGPT2LMHeadModel.from_pretrained("tmp/", from_pt=True) tf.saved_model.save(tf_model,"tmp/saved") loaded = tf.saved_model.load("tmp/saved") print(list(loaded.signatures.keys())) ``` And it returns an empty list **A link to original question on Stack Overflow**: https://stackoverflow.com/questions/52826134/keras-model-subclassing-examples
03-12-2020 10:06:11
03-12-2020 10:06:11
Did you try out to just use this `save_...` function: https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_tf_utils.py#L232 ? -> ``` tf_model = TFGPT2LMHeadModel.from_pretrained("tmp/", from_pt=True) tf_model.save_pretrained("./tf_model") tf_model = TFGPT2LMHeadModel.from_pretrained("./tf_model") ```<|||||>> Did you try out to just use this `save_...` function: > > https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/src/transformers/modeling_tf_utils.py#L232 > > ? > -> > > ``` > tf_model = TFGPT2LMHeadModel.from_pretrained("tmp/", from_pt=True) > tf_model.save_pretrained("./tf_model") > tf_model = TFGPT2LMHeadModel.from_pretrained("./tf_model") > ``` Hi, thanks for the reply. But what I want to do is to save it as a pb file in order to serve the model using tensorflow-serving.<|||||>Can we re-open this? It's still an issue.<|||||>> Can we re-open this? It's still an issue. How to open this issue?<|||||>Sure, sorry I guess I closed this too early!<|||||>Any progress on this issue? How to save the model for production? <|||||>Hmm, I am not really familiar with tensorflow protobuf saving -> @LysandreJik @jplu do you know more about this maybe?<|||||>Hello ! To create a saved model you have to run something like the following lines: ```python import tensorflow as tf from transformers import TFXXXModel, XXXTokenizer hf_model = TFXXXModel.from_pretrained('model/location/path') tokenizer = XXXTokenizer.from_pretrained("tokenizer/location/path") features = tokenizer.encode_plus("Sentence to featurize", add_special_tokens=True, return_tensors="tf") hf_model._set_inputs(features) tf.saved_model.save(hf_model, "saved_model/location/path") ``` Replace XXX by the model name you plan to save. It is also planed to add a `to_saved_model()` method in the trainer, to allow anybody to autimatically create a saved model without to run those lines.<|||||>Hi! Sorry. I misunderstood it. I thought all TF models were saved by TF Trainer and all TF trainer saved models would have a hard time with inference in production. So I thought this post is similar to mine: https://github.com/huggingface/transformers/issues/4758 After finishing with the sample code and sample data, I checked the "output_dir/saved_model" folder, it is empty. Then I restarted the code to save the model to a new directory. ``` model = TFAutoModelForTokenClassification.from_pretrained( model_args.model_name_or_path, from_pt=bool(".bin" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, ) model.save('saved_model/my_model') newmodel = tf.keras.models.load_model('saved_model/my_model') ``` I get the message that the model is not compiled: `WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.` I am wondering how to extract the fine-tuned local model for inference. Thanks.<|||||>Look at the piece of code I have done, it is totally different :) Also you are not using the load and save from the lib, the error message is normal.<|||||>``` hf_model = TFXXXModel.from_pretrained('model/location/path') tokenizer = XXXTokenizer.from_pretrained("tokenizer/location/path") ``` Are these the official models like 'bert-base-uncased'? If yes, then it's not trained. If it is local model, I don't know where the local model is because the "saved_model" folder is empty. <|||||>'you are not using the load and save from the lib, the error message is normal.' 
--- which lib are you referring? I followed only the official tensorflow manual: https://www.tensorflow.org/guide/saved_model<|||||>Ok then sorry I didn't get what you meant. If I recall well, what you are looking for is to load a trained model and run an inference with it? Right?<|||||>Right. I also wish to serve the model through TF serving. <|||||>Ok then at first try the following piece of code and tell me if it works for you: ```python from transformers import BertTokenizer, TFBertForTokenClassification import tensorflow as tf model = TFBertForTokenClassification.from_pretrained("bert-base-uncased") tf.saved_model.save(model, "saved_model") loaded_model = tf.saved_model.load("saved_model") tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") features = {"input_ids": tokenizer.encode("it is me", add_special_tokens=True, return_tensors="tf")} print(loaded_model(features, training=False)) ``` If this works you can do the same for your trained model, just specify your output dir in `.from_pretrained()` function. If you want to create a more elaborate signature than the default one, you have to follow this part of the [documentation](https://www.tensorflow.org/guide/saved_model#specifying_signatures_during_export) Later the TF Trainer will create a saved model in same time than the usual h5 file. Therefore it will be more user friendly to have its own saved model and then use it in production with TF serving.<|||||>Yes, the above code works. I still have some doubts on how TFTrainer loads the saved model. When it is set to the prediction mode, even if I changed the output_dir to nonsense, it still can do the prediction. I also noticed the output_dir/saved_model folder is empty. If so, how can TF Trainer load the model? I asked these still with the intention to make sure I save my fine-tuned model to a right place, then load, and serve it. `python3 run_tf_ner.py --data_dir ./ \ --labels ./labels.txt \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_device_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_predict` If I train my model this way and would like to save the model, I need to set the code to prediction mode, with the trainer initialized, save the model through `tf.saved_model.save(model, "saved_model")`. correct?<|||||>I tested it. That way would not be able to save the model. https://colab.research.google.com/drive/1uPCpR31U5VRMT3dArGyDK9WT6hKQa0bv?usp=sharing Then I am still wondering how to save the pb model through TF Trainer trained model. <|||||>> If I train my model this way and would like to save the model, I need to set the code to prediction mode, with the trainer initialized, save the model through tf.saved_model.save(model, "saved_model"). correct? No, you have just have to open your Python prompt and run these three lines: 1. ```from transformers import TFAutoModelForTokenClassification``` 2. ```model = TFAutoModelForTokenClassification.from_pretrained("<OUTPUT_DIR>")``` 3. ```tf.saved_model.save(model, "saved_model")``` And of course replace `<OUTPUT_DIR>` with the propoer localtion of where your model is. The trainer is only here to train a model and not to serve a model :) That's why it is called trainer ;) If you want a saved model you have to create it yourself with the piece of code I gave you. 
I suggest you to create also your own signature (as indicated in the TF documentation linked above) and then run it as detailed in this [documentation section](https://www.tensorflow.org/guide/saved_model#details_of_the_savedmodel_command_line_interface). For now the models saved by the TF trainer are not compliant with served models, you have to do it yourself manually but this will change in a near future.<|||||>1. If trainer is just used for training, why in _run_tf_ner.py_ line 246, there is a prediction done with the trainer: `predictions, label_ids, metrics = trainer.predict(test_dataset.get_dataset())` If I set the mode to prediction, initialize the trainer with a nonsense output_dir, replace `test_dataset.get_dataset()`, with my own data, I can actually get the predictions. I guess it is initiated through checkpoints dir. It seems that rather than `model.predict(sentence)`, with the logic written in _run_tf_ner.py,_ we need to do prediction through Trainer `trainer.predict(sentence)`. I am not sure if I am right, but line 246 is there, and I can succeed in getting predicted results with the initiated trainer in prediction mode. 2. If I use the code discussed in this post to save and load the model, the _loaded model_ would not convert the sentence to features. ``` from transformers import TFAutoModelForTokenClassification, BertTokenizer, TFBertForTokenClassification import tensorflow as tf output_dir = "model" saved_model_dir = "tf2_0606_german" model = TFAutoModelForTokenClassification.from_pretrained(output_dir) tf.saved_model.save(model, saved_model_dir) loaded_nodel = tf.saved_model.load(saved_model_dir) tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased") sentence = "1951 bis 1953 wurde der nördliche Teil als Jugendburg des Kolpingwerkes gebaut ." features = {"input_ids": tokenizer.encode(sentence, add_special_tokens=True, return_tensors="tf")} print(model(features, training=False)) print(loaded_model(features, training=False)) ``` Error message can be found https://colab.research.google.com/drive/1uPCpR31U5VRMT3dArGyDK9WT6hKQa0bv?usp=sharing#scrollTo=SBCchEi-qlnA My suspicion is "output_dir" does not save all the information it needs, and "checkpoint" directory is where the trainer get initialized when it is set to the prediction mode. But I am not sure how to recover the model information for production with these two directories. ``` 06/06/2020 07:53:52 - INFO - transformers.trainer_tf - Saving checkpoint for step 1500 at checkpoint/ckpt-3 06/06/2020 07:53:55 - INFO - transformers.trainer_tf - Saving model in model 06/06/2020 07:53:55 - INFO - transformers.trainer_tf - Saving model in model/saved_model ``` <|||||>I also found one more complication. The code you showed works only for sentences containing three words or less. If "it is me" is changed to "it is me again", the code will return the same argument error message I mentioned in the last response. 
``` from transformers import BertTokenizer, TFBertForTokenClassification import tensorflow as tf model = TFBertForTokenClassification.from_pretrained("bert-base-uncased") tf.saved_model.save(model, "saved_model") loaded_model = tf.saved_model.load("saved_model") tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") features = {"input_ids": tokenizer.encode("it is me again", add_special_tokens=True, return_tensors="tf")} print(loaded_model(features, training=False)) ```<|||||>> If trainer is just used for training, why in run_tf_ner.py line 246, there is a prediction done with the trainer: This part is only here to evaluate the model and output the predictions on the test set into a file and not for inference in production. It is two distinct cases. > If I set the mode to prediction, initialize the trainer with a nonsense output_dir, replace test_dataset.get_dataset(), with my own data, I can actually get the predictions. I guess it is initiated through checkpoints dir. Yes, it is normal because the predict is just here to evaluate your model on a dataset, and it is not initatied from the checkpoint dir but from the `.h5` file in your model folder only. > If I use the code discussed in this post to save and load the model, the saved model can convert the sentence to features, but it cannot do any prediction; the loaded model would not convert the sentence to features. This is normal because your input doesn't correspond to the signature. The big picture is that from the `loaded_model(...)` line you don't get features, you get the real output of the model, this is what does a saved model. A tensor of values for each token where each value is the prob of the corresponding label. Hence once you get your saved model, run the command: ``` tensorflow_model_server \ --rest_api_port=8501 \ --model_name=ner \ --model_base_path="tf2_0606_german" >server.log 2>&1 ``` Now, you have an API that wraps your model. Finally, in a Python script you can do: ```python import json import numpy import requests my_features = # call here the tokenizer data = json.dumps({"signature_name": "serving_default", "instances": my_features}) headers = {"content-type": "application/json"} json_response = requests.post('http://localhost:8501/v1/models/ner:predict', data=data, headers=headers) predictions = numpy.array(json.loads(json_response.text)["predictions"]) ``` Finally, you get your predictions and you have to code the translation preds -> text. > I also found one more complication. The code you showed works only for sentences containing three words or less. If "it is me" is changed to "it is me again", the code will return the same argument error message I mentioned in the last response. This is totally normal, as I told you, you have to code your own signature as it is showed in the TF documentation that I linked you in my previous post. For now, nothing is implemented in the `transformers` lib to do what you are looking for with a saved model. It means that, to do inference in production with a saved model you have to code all the logic I explained above by yourself. It is planned to integrate this part in a near future, it is even an ongoing work, but far to be finished.<|||||>Thanks so much for your elaborate response! I did not fully appreciate what signature means... Thanks!!! <|||||>@jplu thanks for the great answer. I was wondering if it is possible to include the tokenizer inside the saved model (or something similar in order to make the tokenization inside TF serving ) ? 
Or do we have to use the tokenizer before doing the request ?<|||||>It is currently not possible to integrate the tokenizers in a saved model as preprocessing, you have to do that by yourself before to use the saved model.<|||||>@jplu Thanks for your great answer. But I have a question, in this part ``` import json import numpy import requests my_features = # call here the tokenizer data = json.dumps({"signature_name": "serving_default", "instances": my_features}) headers = {"content-type": "application/json"} json_response = requests.post('http://localhost:8501/v1/models/ner:predict', data=data, headers=headers) predictions = numpy.array(json.loads(json_response.text)["predictions"]) ``` can you give an example about how to do `# call here the tokenizer` part?<|||||>You have plenty of examples on how to use the tokenizers, such as in the examples [folder](https://github.com/huggingface/transformers/tree/master/examples) or inside the [source code](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py#L800).<|||||>hi, @jplu thank you for your answer. I forgot to remove `return_tensor=tf` in tokenizer before so it is failing. I have been working based on your answer on this issue and this [reference](https://colab.research.google.com/drive/1kEg0SnYNtw_IJwu_kl5y3qRVs-BKBmNO#scrollTo=9wilS_mw6wPk) to do inference with Tensorflow Serving Saved Model on Sentiment Analysis task. Please see here for my complete attempt [link to the collab](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?usp=sharing) > This is totally normal, as I told you, you have to code your own signature as it is showed in the TF documentation that I linked you in my previous post. I try to do this by making it like this ``` import tensorflow as tf from transformers import * tf.config.optimizer.set_jit(True) class WrappedModel(tf.Module): def __init__(self): super(WrappedModel, self).__init__() self.model = TFAutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english') @tf.function def __call__(self, x): return self.model(x) model = WrappedModel() call = model.__call__.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids')) tf.saved_model.save(model, saved_model_path, signatures=call, ) ``` it is working fine I try to predict one example or couple examples with the same length of sequences ``` import json import numpy as np import requests my_features = {"input_ids": tokenizer.encode("it is really great, I don't think I will use this", add_special_tokens=True)} my_instances = [my_features, my_features] print(my_instances) data = json.dumps({"signature_name": "serving_default", "instances": [my_features, my_features]}) headers = {"content-type": "application/json"} json_response = requests.post('http://localhost:8503/v1/models/sentiment_analysis2:predict', data=data, headers=headers) print(json_response) predictions = numpy.array(json.loads(json_response.text)["predictions"]) for prediction in predictions: print(np.argmax(prediction)) ``` but when there is more than 1 variation of sequence length, it is not working. So I think this is because the tensor shape for every example must be the same so I try to do padding into `max_seq_length`. 
But something weird happens, the prediction result for the same sentence are different between the [padding](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?authuser=1#scrollTo=bRnLQlyPyTPo&line=2&uniqifier=1) and the [non-padding version](https://colab.research.google.com/drive/1cQx28aD2GpR_GUwzQfbdZyZSuUz-vh7W?authuser=1#scrollTo=jgYV1TJ3jQeV&line=16&uniqifier=1). The more padding tokens added the more model thinks that the sentence is having negative sentiment (probability for label 0 is increasing and for label 1 is decreasing). Can you please tell me what that I did wrong? Also, I am looking to integrate the preprocessing step, inference into Tensorflow Serving and prediction result in step so it can be done automatically instead of manually running separate code. Can you please tell me what option I have regarding this? Thank you in advance! @jplu <|||||>> Can you please tell me what that I did wrong? Nothing, the results depends of the model itself, so you should ask to the person who has uploaded the model. > Can you please tell me what option I have regarding this? Currently no options, you cannot do this.<|||||>@jplu Thank you very much for your quick reply. > Nothing, the results depends of the model itself, so you should ask to the person who has uploaded the model. So if I understand correctly there is no mistake in my code but it is because of the model I use right? I will try with other models then, thank you. > Currently no options, you cannot do this. Ok, thank you.<|||||>@jplu @kevin-yauris to be able to perform the same task with batch_encoding_plus, how should we modify the callback function to achieve that? with existing piece of code, for an instance, input to the model looks like <tf.Tensor: shape=(1, 8), dtype=int32, numpy=array([[ 101, 7592, 1010, 2049, 1037, 4408, 2154, 102]], dtype=int32)> with batch encoding, it might look something like {'input_ids': <tf.Tensor: shape=(2, 6), dtype=int32, numpy= array([[ 101, 7592, 102, 0, 0, 0], [ 101, 2054, 1037, 2204, 2154, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 6), dtype=int32, numpy= array([[1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1]], dtype=int32)>} In which case how should this call function look like? call = model.__call__.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids')) Thanks in advance<|||||>> @jplu @kevin-yauris to be able to perform the same task with batch_encoding_plus, > how should we modify the callback function to achieve that? > > with existing piece of code, > for an instance, input to the model looks like > <tf.Tensor: shape=(1, 8), dtype=int32, numpy=array([[ 101, 7592, 1010, 2049, 1037, 4408, 2154, 102]], dtype=int32)> > > with batch encoding, > it might look something like > {'input_ids': <tf.Tensor: shape=(2, 6), dtype=int32, numpy= > array([[ 101, 7592, 102, 0, 0, 0], > [ 101, 2054, 1037, 2204, 2154, 102]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 6), dtype=int32, numpy= > array([[1, 1, 1, 0, 0, 0], > [1, 1, 1, 1, 1, 1]], dtype=int32)>} > > In which case how should this call function look like? 
> call = model.**call**.get_concrete_function(tf.TensorSpec([None, None], tf.int32, name='input_ids')) > > Thanks in advance Its done, thanks.<|||||>> ``` > from transformers import TFAutoModelForTokenClassification, BertTokenizer, TFBertForTokenClassification > import tensorflow as tf > > output_dir = "model" > saved_model_dir = "tf2_0606_german" > > model = TFAutoModelForTokenClassification.from_pretrained(output_dir) > tf.saved_model.save(model, saved_model_dir) > loaded_nodel = tf.saved_model.load(saved_model_dir) > > tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased") > sentence = "1951 bis 1953 wurde der nördliche Teil als Jugendburg des Kolpingwerkes gebaut ." > features = {"input_ids": tokenizer.encode(sentence, add_special_tokens=True, return_tensors="tf")} > > print(model(features, training=False)) > print(loaded_model(features, training=False)) > ``` > > Error message can be found > https://colab.research.google.com/drive/1uPCpR31U5VRMT3dArGyDK9WT6hKQa0bv?usp=sharing#scrollTo=SBCchEi-qlnA @jx669 Were you able to solve the error in the cell in this notebook ```print(loaded_model(features, training=False)) #not working``` where it asks for the ```input_ids``` to be of shape ```(None, 5)``` ? Have been facing the exact same issue and no clue how to solve this. <|||||>Is there any example of how to query the huggingface T5 model from tensorflow_model_server (grpc)?
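As an editorial aside on the signature question that closes this thread: the answers above describe the approach but never show it end to end. Below is a minimal sketch of a two-input serving signature that accepts batched, variable-length `input_ids` and `attention_mask`; the model name, paths and sanity-check sentences are placeholder assumptions rather than values from this thread, and API names follow the library version discussed here.

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")


@tf.function(input_signature=[
    tf.TensorSpec([None, None], tf.int32, name="input_ids"),
    tf.TensorSpec([None, None], tf.int32, name="attention_mask"),
])
def serving(input_ids, attention_mask):
    # Both dimensions are None, so any batch size and any (padded) sequence length is accepted.
    outputs = model({"input_ids": input_ids, "attention_mask": attention_mask})
    return {"logits": outputs[0]}


tf.saved_model.save(model, "saved_model/1", signatures={"serving_default": serving})

# Sanity check: the tokenizer pads the batch, so both sequences share one length.
loaded = tf.saved_model.load("saved_model/1")
batch = tokenizer.batch_encode_plus(
    ["it is me again", "hello"], pad_to_max_length=True, return_tensors="tf"
)
print(loaded.signatures["serving_default"](
    input_ids=batch["input_ids"], attention_mask=batch["attention_mask"]
))
```

The same two tensors are what would go into the `instances` payload once the saved directory is served with `tensorflow_model_server`.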
transformers
3,245
closed
pad error in BertTokenizer.batch_encode_plus
As the document writes,the value that bert do not attend should have attention value of 0.However, I was using the BertTokenizer,the result is different. This is my code: ``` input_wsrq1ids=tokenizer.batch_encode_plus(d[0],text_pair=d[1],add_special_tokens=True,return_tensors='pt') >>> input_wsrq1ids['input_ids'][0] tensor([ 101, 2198, 2001, 3015, 4581, 2005, 2010, 2047, 2201, 1012, 2002, 2318, 13417, 3213, 1005, 1055, 3796, 1012, 2002, 2699, 2000, 2486, 2370, 2000, 4339, 2021, 2009, 2052, 1050, 1005, 1056, 2079, 2505, 5369, 2699, 2000, 2486, 2370, 2000, 4339, 2021, 2009, 2052, 1050, 1005, 1056, 2079, 2505, 1012, 2002, 2165, 1037, 3328, 1010, 5112, 2041, 2007, 2070, 2814, 1010, 1998, 2246, 2012, 3267, 5369, 2165, 1037, 3328, 1010, 5112, 2041, 2007, 2070, 2814, 1010, 1998, 2246, 2012, 3267, 5369, 2165, 1037, 3328, 1010, 5112, 2041, 2007, 2070, 2814, 1010, 1998, 2246, 2012, 3267, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> input_wsrq1ids['attention_mask'][0] tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) ``` The attention_mask are all ones,however, the input_ids are padded with zero,which seems do not match the attention.
03-12-2020 08:03:44
03-12-2020 08:03:44
This will greatly affect the output of the BERT model; I do not quite know whether this is a problem in my code or a problem in the package. d[0] and d[1] are lists of sentences, which are used to train the model.<|||||>When I was trying to decode the input_ids, the result shows that the text_pair is not encoded properly. `tokenizer.decode(input_wsrq1ids['input_ids'][0])` `"[CLS] john was writing lyrics for his new album. he started experiencing writer's block. he tried to force himself to write but it wouldn't do anythinghe tried to force himself to write but it wouldn't do anything. he took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at nature [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]"` whereas I expected the following answer: `[CLS] john was writing lyrics for his new album. he started experiencing writer's block. he tried to force himself to write but it wouldn't do anythinghe tried to force himself to write but it wouldn't do anything. he took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at naturehe took a walk, hung out with some friends, and looked at nature [SEP] He felt inspiration and then went back home to write [PAD]` which suggests it did not concatenate the sentences in d[0] and d[1]<|||||>The `encode_plus` method creates the attention mask according to the length of the passed input and the max length you're asking it to encode to. It doesn't look into the list to see which tokens are padding tokens, as it expects to perform the padding itself. Instead of padding the sequences yourself, you could use a combination of the `pad_to_max_length` and `max_length` flags for `encode_plus`/`batch_encode_plus`. The attention mask will be correct then.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
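A minimal sketch of the suggested fix above, i.e. letting `encode_plus` do the padding so the attention mask matches it (the model name and `max_length` value are arbitrary choices for illustration):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoded = tokenizer.encode_plus(
    "John was writing lyrics for his new album.",
    text_pair="He felt inspiration and then went back home to write.",
    add_special_tokens=True,
    max_length=32,
    pad_to_max_length=True,
)
# The trailing [PAD] ids now line up with zeros in the attention mask.
print(encoded["input_ids"])
print(encoded["attention_mask"])
```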
transformers
3,244
closed
Get word separator char for tokenization
# 🚀 Feature request When using some tokenization models (e.g. CamemBERT) you can find the char used to separate words after some investigation ([start](https://github.com/huggingface/transformers/blob/a4c75f149269099a98613f51b76cd0b579a109ee/src/transformers/tokenization_camembert.py#L274), [first jump](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_camembert.py#L27), [last jump](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlnet.py#L43)). Exposing this char through a general, documented attribute would be useful. ## Motivation With this you can detect whether a token is a subword or the start of a word (and then use this info for masking). But this is inconsistent across models: after several hours I couldn't find this information for RoBERTa (and GPT-2) in the code. We can obviously determine this char experimentally, but that is not robust to new models or even different versions of the same model.
03-12-2020 07:42:07
03-12-2020 07:42:07
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I don't think you should delete the issue since it's kinda useful to have and not having it may be embarrassing. But do as you want!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
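Pending such an attribute, the "experimental" detection mentioned in the request can be sketched as follows; the heuristics are assumptions tied to the WordPiece, SentencePiece and byte-level BPE conventions, not an official API.

```python
from transformers import BertTokenizer, RobertaTokenizer

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
roberta_tok = RobertaTokenizer.from_pretrained("roberta-base")

print(bert_tok.tokenize("unaffordable housing"))     # continuation pieces start with "##"
print(roberta_tok.tokenize("unaffordable housing"))  # pieces that start a new word (after a space) carry "Ġ"


def is_continuation(token, tokenizer):
    # Heuristic only: WordPiece marks continuations, SentencePiece / byte-level BPE mark word starts.
    if isinstance(tokenizer, BertTokenizer):
        return token.startswith("##")
    return not (token.startswith("Ġ") or token.startswith("▁"))
```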
transformers
3,243
closed
seems TFBertForSequenceClassification cannot load tf1.x model?
For `TFPreTrainedModel`, the call `model.load_weights(resolved_archive_file, by_name=True)` seems to always pass `by_name=True`, while in `tensorflow_core/python/keras/engine/network.py` I see:

```python
if save_format == 'tf':
    status = self._trackable_saver.restore(filepath)
    if by_name:
        raise NotImplementedError(
            'Weights may only be loaded based on topology into Models when '
            'loading TensorFlow-formatted weights (got by_name=True to '
            'load_weights).')
```

So it will always raise `NotImplementedError`.
03-12-2020 07:40:53
03-12-2020 07:40:53
All of our TensorFlow models are TF2+ only.
transformers
3,242
closed
Update examples/ner/run_ner.py
Update the example file by changing the name of AlbertForTokenClassification to AlbertForSequenceClassification.
03-12-2020 06:31:00
03-12-2020 06:31:00
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=h1) Report > Merging [#3242](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4c75f149269099a98613f51b76cd0b579a109ee?src=pr&el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3242/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3242 +/- ## ========================================= - Coverage 77.82% 77.8% -0.02% ========================================= Files 98 98 Lines 16665 16665 ========================================= - Hits 12970 12967 -3 - Misses 3695 3698 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.42% <0%> (-0.42%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=footer). Last update [a4c75f1...340f2a7](https://codecov.io/gh/huggingface/transformers/pull/3242?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I found that it is not suitable for guideline. Close this request.
transformers
3,241
closed
simplify polbert usage example with pipelines
Indeed this is much simpler now! Pipelines look great :)
03-12-2020 04:57:03
03-12-2020 04:57:03
Squashed into #3248 <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=h1) Report > Merging [#3241](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4c75f149269099a98613f51b76cd0b579a109ee?src=pr&el=desc) will **increase** coverage by `0.18%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3241/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3241 +/- ## ========================================== + Coverage 77.82% 78.01% +0.18% ========================================== Files 98 98 Lines 16665 16665 ========================================== + Hits 12970 13001 +31 + Misses 3695 3664 -31 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.56% <0%> (-0.28%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3241/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+5.9%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=footer). Last update [a4c75f1...1fd6564](https://codecov.io/gh/huggingface/transformers/pull/3241?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,240
closed
Minor Bug Fix for Running Roberta on Glue
Since `RobertaTokenizer` does not generate `token_type_ids` by default, running GLUE with RoBERTa throws errors. This fix overrides the default behaviour of the tokenizers and forces them to generate `token_type_ids`.
03-12-2020 04:39:52
03-12-2020 04:39:52
transformers
3,239
closed
Minor Bug Fix for Running Roberta on Glue
Since `RobertaTokenizer` does not generate `token_type_ids` by default, running GLUE with RoBERTa throws errors. This fix overrides the default behaviour of the tokenizers and forces them to generate `token_type_ids`.
03-12-2020 04:24:04
03-12-2020 04:24:04
transformers
3,238
closed
add output_past option to BERT class
I need the key-value present states for BERT as well, like the GPT class already provides (I'm testing a PPLM-like architecture but with a masked LM), and here I've added the option `output_past` to BERT to enable returning those states.
03-12-2020 02:25:53
03-12-2020 02:25:53
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,237
closed
How to encode a batch of sequences?
Hi, I am trying to learn this transformers package. I prepared the data in the following format: `(("Mary spends $20 on pizza"), ("She likes eating it"), ("The pizza was great"))` I saw methods like `tokenizer.encode`, `tokenizer.encode_plus` and `tokenizer.batch_encode_plus`. However, `tokenizer.encode` seems to only encode a single sentence, because when I input the data below, the answer it gives is this: ``` >>> d[0][0] 'John was writing lyrics for his new album' >>> d[0][1] 'Franny did not particularly like all of the immigration happening' >>> input_ids = torch.tensor(tokenizer.encode([d[0][0],d[0][1]])) >>> input_ids tensor([101, 100, 100, 102]) ``` Obviously, this is not the right answer for the encoding. When I try the method `tokenizer.encode_plus`, it doesn't work properly either, even though the documentation says > "text (str or List[str]) – The first sequence to be encoded. This can be a string, a list of strings (tokenized string using the tokenize method) or a list of integers (tokenized string ids using the convert_tokens_to_ids method)" It doesn't even work when I only input a single sentence: ``` >>> input_ids = torch.tensor(tokenizer.encode_plus(d[0][0])) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Could not infer dtype of dict ``` And the method `tokenizer.batch_encode_plus` gives the same error message.
03-12-2020 02:11:04
03-12-2020 02:11:04
`batch_encode_plus` is the correct method :-) ``` from transformers import BertTokenizer batch_input_str = (("Mary spends $20 on pizza"), ("She likes eating it"), ("The pizza was great")) tok = BertTokenizer.from_pretrained('bert-base-uncased') print(tok.batch_encode_plus(batch_input_str, pad_to_max_length=True)) ```
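Going one step further than the answer above, the padded batch can be returned as PyTorch tensors and fed straight to a model; the plain `BertModel` below is just an illustration, not a recommendation for any particular task.

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

batch = tok.batch_encode_plus(
    ["Mary spends $20 on pizza", "She likes eating it", "The pizza was great"],
    pad_to_max_length=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(batch["input_ids"], attention_mask=batch["attention_mask"])
print(outputs[0].shape)  # (batch_size, seq_len, hidden_size)
```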
transformers
3,236
closed
[WIP] Add BART for summarization training with CNN/DM using pytorch-lightning
This pull request adds to the example for BART for summarization. I used the [example for NER](https://github.com/huggingface/transformers/tree/master/examples/ner) using pytorch-lightning as guidance. This example will train on CNN/DM and evaluate, and get decent results, though I haven't trained it on the full dataset just yet. I'm sure there are better defaults for the hyperparams but these seem to work. I based this PR on the code I wrote in this [colab](https://colab.research.google.com/drive/1C4jEf0fnLiz6Xdx4TDz1OoO4BRCjCx1m). This would hopefully close https://github.com/huggingface/transformers/issues/3004 ## TODO - [x] Be able to train the model on a GPU. - [x] remove unused args - [x] add test step and save results. Happy to hear any feedback!
03-12-2020 01:08:11
03-12-2020 01:08:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=h1) Report > Merging [#3236](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d4a01905fad4f5eed2e6c1037dea9877711427a&el=desc) will **not change** coverage by `%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3236/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3236 +/- ## ======================================= Coverage 77.56% 77.56% ======================================= Files 100 100 Lines 16970 16970 ======================================= Hits 13162 13162 Misses 3808 3808 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=footer). Last update [9d4a019...9d4a019](https://codecov.io/gh/huggingface/transformers/pull/3236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Nice! @yjernite might be interested!<|||||>I made those requested changes. And yes I'm planning to run finetuning this weekend and share results. I only have access to a k80 so it'll take a while 🤷🏽‍♂️<|||||>This looks awesome. Let's coordinate with https://github.com/huggingface/transformers/pull/3290 as well to share whatever code is possible. <|||||>@nateraw can you do a review of this PR as well?<|||||>@acarrera94 I will try to get this working this week. If you are in the pytorch-lightning open slack we can also chat a bit more about the design. <|||||>@nateraw I've made all of those changes and it looks like #3290 has been merged, anything else that needs to change? Thanks!<|||||>It's blocked on me, I should be able to get to it tonight. <|||||>New code looks great. Excited to try it out!<|||||>Thanks for sticking with it @acarrera I'm really impressed how concise this became. Next we can get some numbers. <|||||>@acarrera94 `run_train.sh` is using 19GB on my system. Does your system use less? I am also seeing no memory savings from adding `--fp16`. Thanks! <|||||>@sshleifer I usually ran it using --max_seq_lengt=756. And that used less than 16gb of memory with a batch size of 4, so we might want to change that default. And I haven’t tried it using --fp16. That comes from BaseTransformer right?
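For readers landing on this PR for inference rather than training: once a checkpoint is fine-tuned (or when using the public CNN/DM checkpoint, whose namespaced id is assumed below), generating a summary is just `generate()` plus `decode()`; the article text is a placeholder.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions."
inputs = tokenizer.batch_encode_plus([article], max_length=1024, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```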
transformers
3,235
closed
Directories not found when saving checkpoints
`_rotate_checkpoints` deletes checkpoint directories, which causes a "directory not found" error when saving checkpoints.
03-11-2020 22:09:42
03-11-2020 22:09:42
Do you mind making sure the code quality test runs before we merge? You can see how to do that in the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
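The patch itself is not shown in this PR, but the defensive pattern it points at looks roughly like the sketch below; the directory layout and helper names are assumptions, not the actual fix.

```python
import os
import shutil


def rotate_checkpoints(output_dir, prefix="checkpoint", save_total_limit=2):
    checkpoints = sorted(
        (d for d in os.listdir(output_dir) if d.startswith(prefix + "-")),
        key=lambda name: int(name.split("-")[-1]),
    )
    for stale in checkpoints[:-save_total_limit]:
        path = os.path.join(output_dir, stale)
        if os.path.isdir(path):  # guard: the directory may already have been rotated away
            shutil.rmtree(path)


def save_checkpoint(model, output_dir, step):
    path = os.path.join(output_dir, "checkpoint-{}".format(step))
    os.makedirs(path, exist_ok=True)  # make sure the target still exists before saving
    model.save_pretrained(path)
```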
transformers
3,234
closed
[model_cards] 🇹🇷 Add new (cased) DistilBERTurk model
Hi, this PR adds a new distilled BERT model for Turkish: DistilBERTurk 🤗 It was trained with the official Hugging Face [implementation](https://github.com/huggingface/transformers/tree/master/examples/distillation) for model distillation. It uses 7GB of the original training data of BERTurk, and uses the cased BERTurk model as teacher model. DistilBERTurk was trained for 5 days on 4 RTX 2080 TI. Performance is really promising: for PoS tagging the model outperforms the 24-layer XLM-RoBERTa and is only 0.69% behind the teacher model. For NER there's a performance diff of 0.44% compared to mBERT and 1.68% compared to the teacher model.
03-11-2020 21:57:18
03-11-2020 21:57:18
Also cc @VictorSanh for the model distillation script. Thanks @stefan-it this is awesome<|||||>I'm supposed to use the same line-breaking options as GitHub for markdown formatting (using marked.js), however this still seems to not render like on GitHub: https://huggingface.co/dbmdz/distilbert-base-turkish-cased will need to investigate.
transformers
3,233
closed
Bart: update example for #3140 compatibility
03-11-2020 21:36:41
03-11-2020 21:36:41
Had to temporarily pause the self-hosted CI runner while I debug why it's been failing, @sshleifer
transformers
3,232
closed
[TorchHub]Repo's layout is not compatible with TorchHub anymore since 2.0
# 🐛 Bug When I try loading a model/tokenizer from the [pytorch hub](https://pytorch.org/hub/huggingface_pytorch-transformers/) page, the hub loading code is not working anymore. Pre transformers 2.0.0, the same loading code is working. On a quick look, I believe it's related to the repo's fold layout since 2.0 where the `transformers` module is moved inside `src` but `hub_conf` is still assuming `transformers` exists in the same level and we get a module not found error. For context, torch hub insert the repo root directory into `sys.path` to enable import. ## To reproduce Run ``` import torch tokenizer = torch.hub.load('huggingface/transformers:v2.5.0', 'tokenizer', 'bert-base-cased') ``` Stack trace: ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-5-155f4aa294f1> in <module> 1 import torch ----> 2 tokenizer = torch.hub.load('huggingface/transformers:v2.5.0', 'tokenizer', 'bert-base-cased') 3 4 text_1 = "Who was Jim Henson ?" 5 text_2 = "Jim Henson was a puppeteer" ~/miniconda3/envs/poc/lib/python3.7/site-packages/torch/hub.py in load(github, model, *args, **kwargs) 354 sys.path.insert(0, repo_dir) 355 --> 356 hub_module = import_module(MODULE_HUBCONF, repo_dir + '/' + MODULE_HUBCONF) 357 358 entry = _load_entry_from_hubconf(hub_module, model) ~/miniconda3/envs/poc/lib/python3.7/site-packages/torch/hub.py in import_module(name, path) 70 spec = importlib.util.spec_from_file_location(name, path) 71 module = importlib.util.module_from_spec(spec) ---> 72 spec.loader.exec_module(module) 73 return module 74 elif sys.version_info >= (3, 0): ~/miniconda3/envs/poc/lib/python3.7/importlib/_bootstrap_external.py in exec_module(self, module) ~/miniconda3/envs/poc/lib/python3.7/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/.cache/torch/hub/huggingface_transformers_v2.5.0/hubconf.py in <module> ----> 1 from transformers import ( 2 AutoConfig, 3 AutoModel, 4 AutoModelForQuestionAnswering, 5 AutoModelForSequenceClassification, ModuleNotFoundError: No module named 'transformers' ``` ## Expected behavior This was working before 2.0.0. ``` import torch tokenizer = torch.hub.load('huggingface/transformers:1.2.0', 'tokenizer', 'bert-base-cased') ``` ## Environment info - `transformers` version: >2.0.0 - Platform: all - Python version: 3.7 - PyTorch version (GPU?): 1.3.1 - Tensorflow version (GPU?): N/A - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: N/A
03-11-2020 21:09:16
03-11-2020 21:09:16
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Any plan to fix this?<|||||>I missed that issue, but this was fixed a couple of weeks ago, and it's even covered by CI now: https://github.com/huggingface/transformers/blob/master/.github/workflows/github-torch-hub.yml
transformers
3,231
closed
train dev test split with BERT
Does `run_multiple_choice.py` work on train/dev/test splits? I need to run BERT on 3 labeled datasets: train it on my training set, validate it on my validation set (tune hyperparameters and calculate loss), and evaluate it on my test set (report performance). I finally want to do prediction on a fourth, unlabeled dataset. I am wondering which of the scripts in your repository includes these 3 modes. Thank you.
03-11-2020 18:15:02
03-11-2020 18:15:02
You can write or add your own version easily..<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
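One way to "write your own version", as suggested above, is to produce the three labelled splits up front and then point the existing scripts (or a custom evaluation loop) at them; the file names and column layout below are assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("my_dataset.csv")  # assumed columns: text, label
train_df, rest = train_test_split(df, test_size=0.2, random_state=42, stratify=df["label"])
dev_df, test_df = train_test_split(rest, test_size=0.5, random_state=42, stratify=rest["label"])

train_df.to_csv("data/train.csv", index=False)  # used for fine-tuning
dev_df.to_csv("data/dev.csv", index=False)      # used for hyperparameter tuning / loss tracking
test_df.to_csv("data/test.csv", index=False)    # held out for the final reported performance
```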
transformers
3,230
closed
Create README.md for bio+discharge summary BERT
Add Bio+ Discharge Summary BERT from Publicly Available Clinical BERT Embeddings
03-11-2020 16:02:57
03-11-2020 16:02:57
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=h1) Report > Merging [#3230](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e43afb1bb87f01470d0bd16cac2d2aac50a76d7a?src=pr&el=desc) will **increase** coverage by `0.08%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3230/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3230 +/- ## ========================================== + Coverage 77.94% 78.02% +0.08% ========================================== Files 98 98 Lines 16665 16665 ========================================== + Hits 12989 13003 +14 + Misses 3676 3662 -14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.84% <0%> (+0.27%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3230/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.37% <0%> (+2.14%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=footer). Last update [e43afb1...2ed661c](https://codecov.io/gh/huggingface/transformers/pull/3230?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,229
closed
Add Bio+ Clinical BERT model card
Adding Bio+ Clinical BERT model from Publicly Available Clinical BERT Embeddings paper
03-11-2020 15:57:18
03-11-2020 15:57:18
transformers
3,228
closed
Support T5 Generation
In this PR some first commits are added to make T5 work for generation. The `T5WithLMHeadModel.forward()` method has a special API due to its encoder-decoder nature. This is why we need to add a `prepare_inputs_for_generation()` in `t5_modeling_utils.py` to correctly prepare T5's inputs for generation. Some easy translation seems to give reasonable results (same results for TF): ``` from transformers import T5Tokenizer, T5WithLMHeadModel model = T5WithLMHeadModel.from_pretrained('t5-base') tok = T5Tokenizer.from_pretrained('t5-base') text = "translate English to German: How old are you?" input_ids = tok.encode(text, return_tensors='pt') outputs = model.generate(input_ids, bos_token_id=tok.pad_token_id, max_length=22, num_beams=4, do_sample=False, early_stopping=True) print(tok.decode(outputs[0], skip_special_tokens=True)) # prints: # Wie alt bist du?st du?st du?st ``` UPDATE: Updated generate() in both TF and PT to compute the `encoder_outputs` only once for `encoder-decoder` models as discussed below. Tests `RUN_SLOW=1` for `test_modeling_bart.py`, `test_modeling_gpt2.py` and `test_modeling_tf_gpt2.py` all pass. #### **FUTURE PR**: - [ ] add generation integration test for T5 in PT and TF (could be similar to what is done in e.g. …, OR better, compare numbers to the original T5 model numbers @craffel). Good for me to merge! @thomwolf @sshleifer @craffel
03-11-2020 14:50:02
03-11-2020 14:50:02
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=h1) Report > Merging [#3228](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/68ef0a111f8740f06ca4e5a00374ec4e2adb0a6d&el=desc) will **increase** coverage by `0.12%`. > The diff coverage is `97.56%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3228/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3228 +/- ## ========================================== + Coverage 77.48% 77.60% +0.12% ========================================== Files 99 99 Lines 16799 16828 +29 ========================================== + Hits 13017 13060 +43 + Misses 3782 3768 -14 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.91% <ø> (ø)` | | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <ø> (ø)` | | | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `93.28% <ø> (+3.35%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `96.17% <94.20%> (-0.37%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `75.47% <100.00%> (ø)` | | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.26% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.20% <100.00%> (+0.10%)` | :arrow_up: | | [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.82% <100.00%> (+1.60%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.99% <100.00%> (+0.28%)` | :arrow_up: | | ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/3228/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=footer). Last update [68ef0a1...62cf76f](https://codecov.io/gh/huggingface/transformers/pull/3228?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>> I noticed that the T5 tokenizer does not have a BOS token and since we require at the moment to use a BOS token for encoder-decoder generation, I set the bos_token_id to the pad_token_id which is probably not the best way to do it. This is actually the correct thing to do, see e.g. https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer.py#L1744<|||||>What is currently happening in `model.generation()` for encoder-decoder models (Bart and T5) is the following: The `input_ids` variable of the generate() is given to the variable `encoder_input_ids`, which is then **always** put into the forward() of `BartLMHeadModel` and `T5LMHeadModel`. The `input_ids` variable is then initialized with the `BOS` token and auto-regressively updated. After the first step the `encoder_output_ids` are calculated and handed to the `past` variable, which from then on is also **always** put into forward() of `BartLMHeadModel` and `T5LMHeadModel`. At the moment, the `encoder_input_ids` are always put in the forward() of Bart and T5 and then ignored there. This is probably not the cleanest way to do it. Other possibilities might be: 1. calculate the encoder_outputs one time before going into the auto-regressive loop and setting them to the `past` variable already on the first step. 2. leave it as it is now, but set `encoder_input_ids` to None in `prepare_inputs_for_generation()` Or other ideas? I think option 1 is clean - I think Bart and T5 let's you calculate only the encoder_outputs @sshleifer , @craffel no? @craffel @thomwolf @sshleifer<|||||>> calculate the encoder_outputs one time before going into the auto-regressive loop and setting them to the past variable already on the first step. This definitely seems best - the model should compute the encoder outputs and then treat them as fixed (to be attended to) as the decoder generates.<|||||>UPDATE 2: I'm pretty happy with the current version now. To summarize: This PR allows `generate()` for TF & PT T5Model. Three important changes to mention: 1. remove the if `decoder_start_token_id` != `bos_token_id` statement in `generate()` to keep generate() generic (for more explanation, see comments above) 2. remove `encoder()` abstraction method in `Bart` and `T5` and replace by `get_encoder()`. 3. **IMPORTANT**: Move responsablitiy to transform `input_ids` to `inputs_embeds` from `T5ForConditionalGeneration.call()` to `encoder.call()` and `decoder.call()` -> Reasons: a) This way, the encoder is a complete model which can transform `input_ids` to `input_embeds` b) cleaner code in `T5ForConditionalGeneration.call()` c) Bart had this behavior already implemented - make API more similar NOTE: this led to some problems with TF scopes, but thanks to @mfuntowicz and @craffel is solved now by injecting the correct absolute scope to the call method and wrapping the Embedding layer (see comments above). This will issue will also be important when translating Bart to T5Bart @sshleifer 4. `T5Models.call()` arguments are renamed to `BartModel` argument names. T5 produces same good translation results (same as results mentioned on the top) and Bart tests all pass. 
@craffel @thomwolf @sshleifer <|||||>When I use the decoder of BART to call generate(), I get an error saying it 'has no attribute get_encoder', and the decoder is a TensorRT engine inheriting from GenerationMixin. ![Screenshot 2022-01-13 17:12](https://user-images.githubusercontent.com/13781668/149300420-6c3373dc-528b-412c-849d-863e01e1bc70.png) Does anyone know how to fix it? Many thanks! @patrickvonplaten @craffel @codecov-io @jplu <|||||>> When I use the decoder of BART to call generate(), I get an error saying it 'has no attribute get_encoder', and the decoder is a TensorRT engine inheriting from GenerationMixin. ![Screenshot 2022-01-13 17:12](https://user-images.githubusercontent.com/13781668/149300420-6c3373dc-528b-412c-849d-863e01e1bc70.png) > > Does anyone know how to fix it? Many thanks! @patrickvonplaten @craffel @codecov-io @jplu @yuanhuachao - could you please open a new issue for this?
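For context on that last error: since this PR, `generate()` for encoder-decoder models calls `self.get_encoder()` once before the auto-regressive loop, so any object handed to `generate()` has to expose that method. A small illustration with the current class names (`T5WithLMHeadModel` has since been renamed `T5ForConditionalGeneration`; 't5-small' is used only to keep the example light):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tok.encode("translate English to German: How old are you?", return_tensors="pt")

encoder_outputs = model.get_encoder()(input_ids)  # the encoder runs exactly once
print(encoder_outputs[0].shape)                   # (1, seq_len, d_model)

# generate() performs the same call internally, then reuses the result at every decoding step.
print(tok.decode(model.generate(input_ids, num_beams=4, max_length=22)[0], skip_special_tokens=True))
```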
transformers
3,227
closed
An Error report about pipeline
# 🐛 Bug ## Information This may be an easy question, but it has been bothering me all day. When I run the code `nlp = pipeline("question-answering")` it always tells me: `Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-modelcard.json' to download model card file. Creating an empty model card.` If I ignore it and continue to run the rest of the code: `nlp({ 'question': 'What is the name of the repository ?', 'context': 'Pipeline have been included in the huggingface/transformers repository' })` the following error appears: `KeyError: 'token_type_ids'`
03-11-2020 14:23:50
03-11-2020 14:23:50
I have this same issue, but have no problems running: nlp = pipeline("question-answering") Note: To install the library, I had to install tokenizers version 0.6.0 separately, git clone the transformers repo and edit the setup.py file before installing as per @dafraile's answer for issue: https://github.com/huggingface/transformers/issues/2831 Update: This error was fixed when I installed tokenizers==0.5.2<|||||>I sadly have this issue too with the newest transformers 2.6.0 version. Tokenizers is at version 0.5.2. But newest version of tokenizers sadly also doesn't work. And solutions to fix this issue?<|||||>I have the same issue here. I first ran with my own tokenizer, but it failed, and then I tried to run the 03-pipelines.ipynb code with QnA example and I get the following error code. Environment: tensorflow==2.0.0 tensorflow-estimator==2.0.1 tensorflow-gpu==2.0.0 torch==1.4.0 transformers==2.5.1 tokenizers==0.6.0 Code that I ran: nlp_qa = pipeline('question-answering') nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?') Error code: HBox(children=(FloatProgress(value=0.0, description='Downloading', max=230.0, style=ProgressStyle(description_… convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s] --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/brandon/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/brandon/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/home/brandon/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 198, in squad_convert_example_to_features p_mask = np.array(span["token_type_ids"]) KeyError: 'token_type_ids' """ The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) <ipython-input-6-95614263b54d> in <module>() 1 nlp_qa = pipeline('question-answering') ----> 2 nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?') ~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs) 968 False, 969 ) --> 970 for example in examples 971 ] 972 all_answers = [] ~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in <listcomp>(.0) 968 False, 969 ) --> 970 for example in examples 971 ] 972 all_answers = [] ~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/data/processors/squad.py in squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, return_dataset, threads) 314 p.imap(annotate_, examples, chunksize=32), 315 total=len(examples), --> 316 desc="convert squad examples to features", 317 ) 318 ) ~/anaconda3/envs/transformers/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1107 -> 1108 for obj in iterable: 1109 yield obj 1110 # Update and possibly print the progressbar. 
~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in <genexpr>(.0) 323 result._set_length 324 )) --> 325 return (item for chunk in result for item in chunk) 326 327 def imap_unordered(self, func, iterable, chunksize=1): ~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in next(self, timeout) 746 if success: 747 return value --> 748 raise value 749 750 __next__ = next # XXX KeyError: 'token_type_ids' <|||||>Any help would be greatly appreciated!<|||||>use : pip install transformers==2.5.1 instead of : pip install transformers<|||||>Thank you @paras55. your solution worked for me!<|||||>Installing `v2.7.0` should work as well.<|||||>2.7.0 fails with the same error (at least with tokenizers==0.5.2)
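Distilling the workarounds reported above: pin the versions that commenters found to work together and call the pipeline with keyword arguments. The pins below are as reported in this thread, not independently verified here.

```python
# pip install transformers==2.5.1 tokenizers==0.5.2
from transformers import pipeline

nlp_qa = pipeline("question-answering")
print(nlp_qa(
    question="What is the name of the repository?",
    context="Pipelines have been included in the huggingface/transformers repository.",
))
```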
transformers
3,226
closed
Strange behaviour after using BertTokenizer.add_tokens()
When a word is in the original BERT vocabulary, it is not split. But after adding several of its subtokens, it gets split into pieces, which seems inconsistent with the longest-match-first algorithm. To reproduce: ``` import transformers tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased') tokenizer.tokenize('involve') tokenizer.add_tokens(['inv','ol','ve']) tokenizer.tokenize('involve') ``` The first `tokenize` call returns 'involve', and the call after `add_tokens` returns ['inv','##ol','##ve']. Is this behaviour expected or is it a bug?
03-11-2020 13:40:18
03-11-2020 13:40:18
No, this is expected behavior. If it was returning `['inv', 'ol', 've']`, then the model would identify each token as being a beginning of word token, whereas the last two are actually part of words following a beginning of a word.<|||||>Thanks for your response. But since both 'involve' and `['inv', 'ol', 've']` are in the vocabulary, shouldn't 'involve' be kept unsplit instead of split into subword? I am expecting the output to be `'involve'` instead of `['inv','##ol','##ve']`.
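A quick check that makes the reported behaviour easier to reason about: 'involve' is already a full-word vocabulary entry (which is why it stayed whole before `add_tokens`), and the newly added pieces then compete with it. The membership test below only inspects the vocab; it makes no claim about the resulting split.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
for token in ["involve", "inv", "ol", "ve"]:
    print(token, token in tokenizer.vocab)
```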
transformers
3,225
closed
Complete merge Seq-2-Seq generation into default generation
This is a follow-up PR to finalize #3140. There was still no conclusion on how to handle the fairseq tricks for generation. To summarize: I think we have three options: 1. Remove all fairseq tricks. Here the ROUGE score is: **20.285** 2. Implement the fairseq tricks EXCEPT leaving the starting decoding_inputs_tokens to be the BOS token instead of EOS. Here the ROUGE score is: **19.369** 3. Add all fairseq tricks and maybe add a new argument to `generate()` which is called `decoder_start_token_id=bos_token_id`, but can be overridden to be the `eos_token_id` in the case of Bart. Here the ROUGE score is: **21.072** ROUGE scores from @sshleifer For comparison: ![76256460-5b675780-6226-11ea-8516-28d0427251ca](https://user-images.githubusercontent.com/23423619/76422619-cdb27600-63a5-11ea-82c7-f36addb7fefb.png) UPDATE: Given the above scores, option 1. was chosen for the moment to have the same scores as fairseq. This means that we have to start the decoder ids with an EOS token (which might be weird and fairseq specific). Therefore, a new argument `decoder_start_token_id` was added to the generate function that defaults to the `bos_token_id`. When using Bart generate this argument should be set to the `eos_token_id` to have good results. To see how `Bart.generate()` should be used take a look at: https://github.com/huggingface/transformers/blob/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab/tests/test_modeling_bart.py#L470 At the moment option 2. is implemented which seems to give the worst results and is also not the cleanest option. This PR implements option 3. For me either option 1. or option 3. is fine. Up for discussion @thomwolf , @julien-c , @LysandreJik @sshleifer
03-11-2020 13:38:30
03-11-2020 13:38:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=h1) Report > Merging [#3225](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e81b9d8d76a4d41a13f74eb5e0f4a65d8143cab?src=pr&el=desc) will **increase** coverage by `0.1%`. > The diff coverage is `100%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3225/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3225 +/- ## ========================================= + Coverage 77.93% 78.03% +0.1% ========================================= Files 98 98 Lines 16666 16668 +2 ========================================= + Hits 12988 13007 +19 + Misses 3678 3661 -17 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `93.86% <100%> (+0.15%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3225/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.55% <0%> (+2.86%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=footer). Last update [2e81b9d...6a82f77](https://codecov.io/gh/huggingface/transformers/pull/3225?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Another issue that came up today from @yjernite, Bart does not support `do_sample=True` before 3140 this was clear and now it is :(<|||||>`Bart.generate()` with `do_sample=True` does not throw any errors. If it should not be done this way then we can just write in the docs that `do_sample` should be set to `False` and show an example. I don't see a problem here. We could think about a `Bart.summarize()` that calls `Bart.generate()` with the correct parameters if it's just for the prettier API. <|||||>I prefer option #3, it looks like very little code and the best metrics. Could you also fix the kwarg in `examples/summarization/bart/evaluate_cnn.py`?<|||||>> I prefer option #3, it looks like very little code and the best metrics. > Could you also fix the kwarg in `examples/summarization/bart/evaluate_cnn.py`? done :-) <|||||>Linked to PR: https://github.com/huggingface/transformers/pull/3264 Might need to fix eos_token_id conflicts when rebasing/merging.<|||||>This one is ok for me. Is it currently option 3 which is implemented? Also, I guess we could default to `do_sample==False` in `generate()`. Seems like the default expectation from the user is simple greedy decoding to me.<|||||>Yeah currently option 3 is implemented. `do_sample` used to default to `False`, but was changed to `True` by @LysandreJik (can't find the issue/PR anymore :-/) Does not matter too much for me what is chosen, just would need to update some tests in `modeling_utils.py`<|||||>Yeah I'd love to merge this! 
Having trouble connecting to brutasse to run rouge, but afaict it will be the same as pre 3140 :) <|||||>Good to merge for me! changing `do_sample=False` can be done in another PR, I think. IMPORTANT: the `config.json` files of: - `bart-large-cnn` - `bart-large-mnli` - `bart-large ` have to be updated on AWS to pass all slow tests. All `special_tokens_id` parameters should be deleted there. For the moment, we will go with the solution: #3264 .<|||||>Ok merging. I let you continue in other PRs and ping me.<|||||>> Good to merge for me! > changing `do_sample=False` can be done in another PR, I think. > > IMPORTANT: > the `config.json` files of: > > * `bart-large-cnn` > * `bart-large-mnli` > * `bart-large ` > > have to be updated on AWS to pass all slow tests. All `special_tokens_id` parameters should be deleted there. For the moment, we will go with the solution: > #3264 . Changed the configs on AWS. All slow tests pass now.
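A sketch of the API described in option 3, i.e. starting decoding from EOS through the new `decoder_start_token_id` argument. With recent releases the value is read from the model config, so passing it explicitly as below is only for illustration; the namespaced checkpoint id is assumed.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

text = "The tower is 324 metres tall, about the same height as an 81-storey building."
inputs = tokenizer.batch_encode_plus([text], return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    max_length=40,
    early_stopping=True,
    decoder_start_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```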
transformers
3,224
closed
Problem with PreTrainedTokenizerFast and return_offsets_mapping
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert, bert-large-uncased-whole-word-masking-finetuned-squad Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: * [X] my own modified scripts: The tasks I am working on is: * [X] an official GLUE/SQUaD task: SQUaD * [ ] my own task or dataset: (give details below) ## To reproduce Script to reproduce the behavior: ```python # transformers v2.5.1 (https://github.com/huggingface/transformers/releases) from transformers import AutoTokenizer, AutoModelForQuestionAnswering import torch model_name = "bert-large-uncased-whole-word-masking-finetuned-squad" tokeniser = AutoTokenizer.from_pretrained(model_name, use_fast=True) inputs = tokeniser.encode_plus("Who is Bert?", "Bert is a puppet by Jim Henson", add_special_tokens=True, return_tensors="pt", return_offsets_mapping=True) ``` This script produces the following wrong output: ``` Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1741, in <module> main() File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1735, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\pydevd.py", line 1135, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2018.3.5\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "D:/OneDrive/RC/Segovia/DEMO-11/Adam11/_QUESTION_ANSWERER.py", line 8, in <module> inputs = tokeniser.encode_plus("Who is Bert?", "Bert is a puppet by Jim Henson", add_special_tokens=True, return_tensors="pt", return_offsets_mapping=True) File "C:\Users\mary\.conda\envs\adam11\lib\site-packages\transformers\tokenization_utils.py", line 1889, in encode_plus **kwargs, File "C:\Users\mary\.conda\envs\adam11\lib\site-packages\transformers\tokenization_utils.py", line 1843, in batch_encode_plus stack = torch.stack(stack, dim=0) TypeError: expected Tensor as element 0 in argument 0, but got list ``` ## Expected behavior According to the documentation that is available in file "tokenization_utils.py", the behaviour should be as follows: >return_offsets_mapping: >(optional) Set to True to return (char_start, char_end) for each token (default False). >If using Python's tokenizer, this method will raise NotImplementedError. This one is only available on Rust-based tokenizers inheriting from PreTrainedTokenizerFast. ## Environment info - `transformers` version: 2.5.1 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## My Patch I have isolated the problem in file `tokenization_utils.py`. Take a look at method `batch_encode_plus`. 
It has the following statements: ```python # Sanitize the output to have dict[list] from list[dict] sanitized = {} for key in tokens[0].keys(): stack = [e for item in tokens for e in item[key]] if return_tensors == "tf": stack = tf.stack(stack, axis=0) elif return_tensors == "pt": stack = torch.stack(stack, dim=0) elif not return_tensors and len(stack) == 1: stack = stack[0] sanitized[key] = stack ``` The problem is that `stack` may be a list of torch.Tensors, but it may also be a list of tuples with start/end offsets when `return_offsets_mapping=True`. In such cases, `tf.stack` or `torch.stack` will fail because they expect a list of tensors as argument, not a list of tuples. I have patched my local transformers installation as follows, and it seems to work well: ```python if return_tensors and len(stack) == 1 and isinstance(stack[0], torch.Tensor): if return_tensors == "tf": stack = tf.stack(stack, axis=0) elif return_tensors == "pt": stack = torch.stack(stack, dim=0) elif not return_tensors and len(stack) == 1: stack = stack[0] ``` Thanks!
03-11-2020 11:04:27
03-11-2020 11:04:27
I am running into the same issue. Any progress on getting this into a release?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
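For readers following the record above, here is a minimal, self-contained sketch of the guard idea the reporter proposes: only stack values that really are tensors and leave tuple-valued fields such as the offset mapping untouched. The helper name `sanitize` and the toy `encodings` input are invented for illustration; this is not the exact code that was merged into the library.

```python
# Minimal sketch of the proposed guard (hypothetical names, not library code).
import torch

def sanitize(tokens, return_tensors=None):
    """Turn a list[dict] of encodings into a dict[list], stacking only tensor values."""
    sanitized = {}
    for key in tokens[0].keys():
        stack = [e for item in tokens for e in item[key]]
        # Only stack when every element really is a tensor; tuples such as
        # (char_start, char_end) offsets are kept as plain Python lists.
        if return_tensors == "pt" and all(isinstance(e, torch.Tensor) for e in stack):
            stack = torch.stack(stack, dim=0)
        elif not return_tensors and len(stack) == 1:
            stack = stack[0]
        sanitized[key] = stack
    return sanitized

encodings = [
    {"input_ids": [torch.tensor([5, 50, 6])], "offset_mapping": [[(0, 0), (0, 2), (0, 0)]]},
]
print(sanitize(encodings, return_tensors="pt"))
```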
transformers
3,223
closed
torch.distributed.barrier() raises an NCCL error
## Details <!-- Description of your issue --> **singularity :** ```singularity build pytorch20.02.simg docker://nvcr.io/nvidia/pytorch:20.02-py3``` I use Slurm and singularity to run run_glue.py but have NCCL error on torch.distributed.barrier() ### test.sh ``` #!/bin/bash #SBATCH --nodes=1 #SBATCH --ntasks=2 #SBATCH --cpus-per-task=4 #SBATCH --gres=gpu:2 #SBATCH --mem=161920 module purge module load compiler/gnu/7.3.0 openmpi3 singularity singularity exec pytorch20.02.simg python -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type bert --model_name_or_path bert-base-uncased --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./glue_data/MRPC --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/ ``` ### Output ``` Traceback (most recent call last): Traceback (most recent call last): File "run_glue.py", line 701, in <module> File "run_glue.py", line 701, in <module> main() File "run_glue.py", line 618, in main main() File "run_glue.py", line 641, in main torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier work = _default_pg.barrier() RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:450, unhandled system error, NCCL version 2.5.6 work = _default_pg.barrier() RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:450, unhandled system error, NCCL version 2.5.6 [E ProcessGroupNCCL.cpp:284] NCCL watchdog thread terminated [E ProcessGroupNCCL.cpp:284] NCCL watchdog thread terminated Traceback (most recent call last): File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 263, in <module> main() File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 259, in main cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_glue.py', '--local_rank=1', '--model_type', 'bert', '--model_name_or_path', 'bert-base-uncased', '--task_name', 'MRPC', '--do_train', '--do_eval', '--do_lower_case', '--data_dir', '.glue_data/MRPC', '--max_seq_length', '128', '--per_gpu_eval_batch_size=8', '--per_gpu_train_batch_size=8', '--learning_rate', '2e-5', '--num_train_epochs', '3.0', '--output_dir', '/tmp/MRPC/']' returned non-zero exit status 1. ***************************************** Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. 
***************************************** ``` Solved (the fix is adding the `--nv` flag to `singularity exec`): ``` #!/bin/bash #SBATCH --nodes=1 #SBATCH --ntasks=2 #SBATCH --cpus-per-task=4 #SBATCH --gres=gpu:2 #SBATCH --mem=161920 module purge module load compiler/gnu/7.3.0 openmpi3 singularity singularity exec --nv pytorch20.02.simg python -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type bert --model_name_or_path bert-base-uncased --task_name MRPC --do_train --do_eval --do_lower_case --data_dir ./glue_data/MRPC --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/ ```
03-11-2020 08:46:53
03-11-2020 08:46:53
transformers
3,222
closed
Why are the pre-trained models downloaded again each time?
I often use pre-trained models, but every time I want to load one after a restart it gets downloaded again. Is there a function that automatically saves the downloaded model to a directory? In Gluon-NLP, the models you use are saved under `.mxnet` in your home directory.
03-11-2020 08:36:27
03-11-2020 08:36:27
They should definitely be cached locally. Do you have a sample code showing the behaviour of re-downloading every time?<|||||>Yes, here is an example (it does not happen every time, only after a system restart): ``` import torch from transformers import AlbertTokenizer albert_tokenizer = AlbertTokenizer.from_pretrained("albert-xxlarge-v2") ``` It will show this progress bar each time I run this after a shutdown or reboot: **`Downloading: 100%|██████████████████████████████████████| 760k/760k [00:01<00:00, 556kB/s]`** I searched the whole file system and no file matches `albert` (even if I didn't restart the system). So which directory should the model be in under normal circumstances?<|||||>The models are by default cached to a hidden directory.<|||||>@AdityaSoni19031997 Yes, I searched all the hidden directories as well and there's no eligible model file. Even if there is, it disappears after a restart. And what is the name of the hidden directory?<|||||>@julien-c I found that it is cached locally, but why does it disappear after a **reboot**? ![Screenshot from 2020-03-17 18-01-14](https://user-images.githubusercontent.com/33285394/76915156-a7934700-6879-11ea-92e8-2f7b6199e52f.png) <|||||>Is there now a solution for this? In my case it tries to download the Llama-2 model (almost 10 GB) each time.
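A small sketch of the workaround implied in the thread above: if the default hidden cache directory lives somewhere that gets wiped at reboot, you can pin the files to a location you control. The paths used here are examples only, assuming a transformers 2.x style API.

```python
from transformers import AlbertTokenizer

# Download once into an explicit cache directory instead of the default hidden one.
tokenizer = AlbertTokenizer.from_pretrained(
    "albert-xxlarge-v2",
    cache_dir="./hf_cache",  # example path; downloaded files are stored and reused here
)

# Alternatively, keep a plain local copy that can be loaded without any network access.
tokenizer.save_pretrained("./albert-xxlarge-v2-local")
tokenizer = AlbertTokenizer.from_pretrained("./albert-xxlarge-v2-local")
```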
transformers
3,221
closed
Model card for dkleczek/bert-base-polish-uncased-v1
Model card for dkleczek/bert-base-polish-uncased-v1
03-11-2020 07:26:58
03-11-2020 07:26:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=h1) Report > Merging [#3221](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3221/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3221 +/- ## ========================================== - Coverage 78.14% 78.14% -0.01% ========================================== Files 98 98 Lines 16668 16668 ========================================== - Hits 13026 13025 -1 - Misses 3642 3643 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=footer). Last update [d6de642...aa7c949](https://codecov.io/gh/huggingface/transformers/pull/3221?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing => [model page](https://huggingface.co/dkleczek/bert-base-polish-uncased-v1)<|||||>will fix images + lowercase the language tag in the next commit
transformers
3,220
closed
How to tokenize words into characters
I am studying machine reading comprehension with XLM-RoBERTa. My data is KorQuAD. I need to tokenize every word into characters, e.g. in English: This is a dog -> _T h i s _i s _a _d o g. Please let me know how.
03-11-2020 06:44:37
03-11-2020 06:44:37
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
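Since the question above never got a concrete answer in the thread, here is a rough sketch of how the requested character-level split could be produced by hand; the "▁" prefix imitates SentencePiece's word-boundary marker. This is purely illustrative and not an official tokenizer option.

```python
# Rough sketch of the character-level split shown in the issue above.
def char_tokenize(sentence):
    tokens = []
    for word in sentence.split():
        tokens.append("▁" + word[0])   # mark the start of each word
        tokens.extend(list(word[1:]))  # remaining characters one by one
    return tokens

print(char_tokenize("This is a dog"))
# ['▁T', 'h', 'i', 's', '▁i', 's', '▁a', '▁d', 'o', 'g']
```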
transformers
3,219
closed
Typo in warning message
`T5Tokenizer` instead of `XLNetTokenizer`
03-11-2020 00:34:32
03-11-2020 00:34:32
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=h1) Report > Merging [#3219](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **not change** coverage by `%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3219/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3219 +/- ## ======================================= Coverage 78.14% 78.14% ======================================= Files 98 98 Lines 16668 16668 ======================================= Hits 13026 13026 Misses 3642 3642 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <ø> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3219/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0.00%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=footer). Last update [d6de642...0a77ca6](https://codecov.io/gh/huggingface/transformers/pull/3219?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,218
closed
Create README.md
03-11-2020 00:03:22
03-11-2020 00:03:22
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=h1) Report > Merging [#3218](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d6de6423baf02a971d38ee69824104a1f0f85ad2&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3218/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3218 +/- ## ========================================== - Coverage 78.14% 78.14% -0.01% ========================================== Files 98 98 Lines 16668 16668 ========================================== - Hits 13026 13025 -1 - Misses 3642 3643 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3218/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.54% <0.00%> (-0.20%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=footer). Last update [d6de642...a3ccef9](https://codecov.io/gh/huggingface/transformers/pull/3218?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,217
closed
Create README.md
03-10-2020 23:38:43
03-10-2020 23:38:43
transformers
3,216
closed
Create README.md
03-10-2020 23:38:36
03-10-2020 23:38:36
transformers
3,215
closed
Create README.md
03-10-2020 23:37:11
03-10-2020 23:37:11
transformers
3,214
closed
Create README.md
03-10-2020 23:33:21
03-10-2020 23:33:21
transformers
3,213
closed
fix typo in docstring demonstrating invocation of PreTrainedEncoderDecoder.from_pretrained
03-10-2020 22:41:05
03-10-2020 22:41:05
transformers
3,212
closed
Update README.md
- Update title - Remove metrics
03-10-2020 21:59:35
03-10-2020 21:59:35
transformers
3,211
closed
Update README.md
Change title to clarify the model description
03-10-2020 21:56:27
03-10-2020 21:56:27
transformers
3,210
closed
Update README.md
- Remove metrics until other benchmarks are used to test the model
03-10-2020 21:54:34
03-10-2020 21:54:34
transformers
3,209
closed
Update README.md
- Remove metrics until tested on other xquad benchmarks
03-10-2020 21:52:02
03-10-2020 21:52:02
transformers
3,208
closed
Error loading pretrained bert-base-multilingual-cased
# 🐛 Bug ## Information Loading `bert-base-multilingual-cased` from pretrained gives an error: ``` Traceback (most recent call last): File "error.py", line 4, in <module> bert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config) File "/scratch/gobi1/bai/bai-conda/lib/python3.7/site-packages/transformers/modeling_auto.py", line 380, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/scratch/gobi1/bai/bai-conda/lib/python3.7/site-packages/transformers/modeling_utils.py", line 558, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs) RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([30522, 768]). ``` ## To reproduce The following snippet produces the error: ``` import transformers config = transformers.BertConfig(output_hidden_states=True) bert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config) print(bert_model) ``` ## Expected behavior The model should load. Note that `bert-base-uncased` is able to load properly. ## Environment info - `transformers` version: 2.5.1 - Platform: Linux - Python version: 3.7.5 - PyTorch version (GPU?): 1.4.0 with GPU - Tensorflow version (GPU?): N/A - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
03-10-2020 20:02:53
03-10-2020 20:02:53
You're creating a configuration based on the size of `bert-base-uncased`, so yes, it will work for that checkpoint. For any other checkpoint, however, you would need to change the values which are different (e.g. vocab size, which is what's failing in your case). It is indicated in the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertconfig): > This is the configuration class to store the configuration of a BertModel. It is used to instantiate an BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BERT bert-base-uncased architecture. In order to load a configuration automatically from a checkpoint, you may use `from_pretrained` on the configuration as well. In your case, that would be: ```py import transformers config = transformers.BertConfig.from_pretrained("bert-base-multilingual-cased", output_hidden_states=True) bert_model = transformers.AutoModel.from_pretrained('bert-base-multilingual-cased', config=config) print(bert_model) ```
transformers
3,207
closed
Pipeline for Question Answering: How to return multiple correct answers?
Very simple question: I'm using the transformers question-answering pipeline on a very long piece of text, and there are multiple occurrences of the correct answer. I want them all. I was wondering if I could retrieve the 10 best-scoring answers instead of just one. Many thanks.
03-10-2020 17:27:13
03-10-2020 17:27:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I need a solution to this problem as well. Please help, thanks.<|||||>The solution is to use the `topk` parameter; for example, following the example in https://huggingface.co/transformers/task_summary.html about question answering, if you want 3 answers: ` result = nlp(question="What is extractive question answering?", context=context, topk = 3)`<|||||>@fumpe Can this work for SageMaker? Actually I am trying to deploy this pipeline in SageMaker; how can we customize the topk parameter in that (as SageMaker works a bit differently)? SageMaker currently returns only 1 answer, and I am unable to modify the parameters to increase the number of returned items. Thank you. This is my model and pipeline setup in SageMaker: hub = { 'HF_MODEL_ID':'valhalla/t5-base-qa-qg-hl', 'HF_TASK':'text2text-generation' }
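Expanding on the `topk` answer in the record above, a short sketch of retrieving several answers from the question-answering pipeline; it uses the pipeline's default pretrained model and an invented example context.

```python
from transformers import pipeline

nlp = pipeline("question-answering")
context = (
    "Extractive Question Answering is the task of extracting an answer "
    "from a text given a question."
)

# Request the 10 best-scoring spans instead of a single answer.
results = nlp(question="What is extractive question answering?", context=context, topk=10)
for result in results:
    print(result["score"], result["answer"])
```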
transformers
3,206
closed
More details about DistilBERT experiment setting.
The NeurIPS workshop paper (http://arxiv.org/abs/1910.01108) does not provide the loss weights and other distillation hyper-parameters (temperature, learning rate, epochs, steps...), and neither does the blog post (https://medium.com/huggingface/distilbert-8cf3380435b5). So when I try to experiment with these scripts (/transformers/examples/distillation/), it's hard to set the hyper-parameters. Actually, I tried to train my model with the distillation/ scripts (loss weights all set equally to 0.33) and got undesirable results. :sob: I think it would be helpful for others to explicitly write the experimental settings of DistilBERT in the README file. Have a good day.
03-10-2020 17:13:46
03-10-2020 17:13:46
Did you take a look at the [distillation README](https://github.com/huggingface/transformers/tree/master/examples/distillation)? It shows the command that was used to train the distilled model.<|||||>Hey @silencio94, if you need help with distillation, drop me an email at clement [at] huggingface [dot] co.<|||||>Thanks for replying. I'm planning to do some experiments on my DistilBERT after distillation. Then I'll send an email or leave another issue. Thanks!
transformers
3,205
closed
Where are the position embeddings in BERT for training a new model from scratch?
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
03-10-2020 13:35:16
03-10-2020 13:35:16
Right here https://github.com/huggingface/transformers/blob/31f2437f07cf014a042789f52fa1519a485e8b2b/src/transformers/modeling_bert.py#L150 In the future, please make an effort to write a decent post that explains exactly what you need. This is very low quality.<|||||>Sorry for the low quality. But how about its initialization? In the paper, it uses sin and cos functions, while the code seems to use random initialization.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
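To make the distinction in the exchange above concrete: in this library BERT's position embeddings are a learnable `nn.Embedding` (randomly initialized and then trained), whereas the fixed sin/cos table comes from the original "Attention Is All You Need" formulation. The sketch below is illustrative only and is not code taken from `modeling_bert.py`.

```python
import math
import torch
import torch.nn as nn

max_position, hidden_size = 512, 768

# 1) Learned positions, as BERT uses them (randomly initialized, trained with the model).
learned_position_embeddings = nn.Embedding(max_position, hidden_size)

# 2) Fixed sinusoidal positions, if you want the original Transformer scheme instead.
def sinusoidal_table(n_pos, dim):
    position = torch.arange(n_pos, dtype=torch.float).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, dim, 2, dtype=torch.float) * (-math.log(10000.0) / dim))
    table = torch.zeros(n_pos, dim)
    table[:, 0::2] = torch.sin(position * div_term)
    table[:, 1::2] = torch.cos(position * div_term)
    return table

# Overwrite the learned table with the fixed sin/cos values if desired.
learned_position_embeddings.weight.data.copy_(sinusoidal_table(max_position, hidden_size))
```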
transformers
3,204
closed
UnboundLocalError: local variable 'tokenizer' referenced before assignment
I am running the example code on the homepage. However, I met this problem. ``` import torch from transformers import * MODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'), (OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'), (GPT2Model, GPT2Tokenizer, 'gpt2'), (CTRLModel, CTRLTokenizer, 'ctrl'), (TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'), (XLNetModel, XLNetTokenizer, 'xlnet-base-cased'), (XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'), (DistilBertModel, DistilBertTokenizer, 'distilbert-base-cased'), (RobertaModel, RobertaTokenizer, 'roberta-base'), (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'), ] for model_class, tokenizer_class, pretrained_weights in MODELS: tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights) input_ids = torch.tensor([tokenizer.encode("Here is some text to encode", add_special_tokens=True)]) with torch.no_grad(): last_hidden_states = model(input_ids)[0] `UnboundLocalError: local variable 'tokenizer' referenced before assignment` ``` This happened when the model_class got to XLMModel. I do not quite understand why this happens, because the problem only occurs when the model is XLMModel.
03-10-2020 12:25:23
03-10-2020 12:25:23
Plus:I have seen a similiar issue in this project,however the problem in that issue is that he did not input the right pretrain_weights.But I do not think that will be the solution in here<|||||>Similiarly,I aslo tried DistilBert,Roberta,XLMRoberta,these 3 models also cannot work for me,the error message is the same as the one I described above.<|||||>I just tried this and cannot reproduce the behaviour that you indicate. Are you running this from a notebook? Try restarting your kernel and running it again.<|||||>> I just tried this and cannot reproduce the behaviour that you indicate. Are you running this from a notebook? Try restarting your kernel and running it again. I run this programme on the linux GPU server,I tried restarting the python programme,however,the problem is still exsiting.Would this be the problem of downloading the model? <|||||>No. UnboundLocalError simply means that Python hasn't seen this variable before, which cannot occur in your code snippet. If the models were downloaded incorrectly, you'd get another error. Even if the `tokenizer` was initialized as `None` you'd get another error. Are you sure that is your _only_ code that is running? Please pos the full trace.<|||||>> No. UnboundLocalError simply means that Python hasn't seen this variable before, which cannot occur in your code snippet. If the models were downloaded incorrectly, you'd get another error. Even if the `tokenizer` was initialized as `None` you'd get another error. > > Are you sure that is your _only_ code that is running? Please pos the full trace. ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 302, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 438, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/users4/bwchen/anaconda3/lib/python3.7/site-packages/transformers/tokenization_bert.py", line 164, in __init__ "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) ValueError: Can't find a vocabulary file at path '/users4/bwchen/.cache/torch/transformers/37cc1eaaea18a456726fc28ecb438852f0ca1d9e7d259e6e3747ee33065936f6'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)` ``` I am sure that is the only code I was running at that time , I am tring to reproduce this error.This time it is working properly when the model_class goes the aforementioned 'wrong' model XLMModel. However,when the model continues to run,I met another problem when the model was the DistillBert, does this error means that I have to use BertTokenizer instead of DistillBertTokenizer?<|||||>I can also attest to this error. I am using a Kaggle notebook, and I get this error after running this in my first cell. Most of it is default code, bottom two lines are the key ones. ``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. 
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # Any results you write to the current directory are saved as output. print(os.getcwd(), os.listdir()) from transformers import RobertaTokenizer tknzr = RobertaTokenizer.from_pretrained('roberta-large') ``` Error thrown ``` UnboundLocalError Traceback (most recent call last) <ipython-input-1-7957db35f110> in <module> 19 from transformers import RobertaTokenizer 20 ---> 21 tknzr = RobertaTokenizer.from_pretrained('roberta-large') /opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs) 300 301 """ --> 302 return cls._from_pretrained(*inputs, **kwargs) 303 304 /opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 442 443 # Save inputs and kwargs for saving and re-loading with ``save_pretrained`` --> 444 tokenizer.init_inputs = init_inputs 445 tokenizer.init_kwargs = init_kwargs 446 UnboundLocalError: local variable 'tokenizer' referenced before assignment ``` Kaggle runs transformers version 2.3.0 by default. After updating to 2.5.1 it worked just fine. To update on Kaggle, turn the internet option on in the settings in the right side. Then do `!pip install -U transformers`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,203
closed
Attention mask always returns array of ones for CamembertTokenizer.batch_encode_plus
# 🐛 Bug ## Information The model i'm using is CamebertTokenizer for the French language The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: - First init the models with the following code: ```python camembert_tokenizer = CamembertTokenizer.from_pretrained('../models/', cache_dir='./models') camembert_tf_model = TFCamembertModel.from_pretrained('../models/', output_hidden_states=True, cache_dir='./models' ) camembert_tf_model.trainable = False ``` - Prepare the input data : ```python text = ' '.join(['je' for i in range(25)]) texts = [ text, "je suis cool"] input_ids = camembert_tokenizer.batch_encode_plus(texts, add_special_tokens=True, max_length=8, return_tensors='tf') print(input_ids) ``` ## Expected behavior What should happen is that the padded tokens should have a mask value of 0 if I've correctly understood the doc. so the output of the snippet should be : ``` {'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[ 5, 50, 50, 50, 50, 50, 50, 6], [ 5, 50, 146, 4261, 6, 1, 1, 1]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 0, 0, 0]], dtype=int32)>} ``` Instead, i'm always getting an attention_mask full of ones like this : ``` {'input_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[ 5, 50, 50, 50, 50, 50, 50, 6], [ 5, 50, 146, 4261, 6, 1, 1, 1]], dtype=int32)>, 'token_type_ids': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(2, 8), dtype=int32, numpy= array([[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>} ``` ## Environment info - `transformers` version: 2.5.1 - Platform: Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: (False) - Using distributed or parallel set-up in script?: (False)
03-10-2020 10:54:02
03-10-2020 10:54:02
**EDIT** : I've been accidentally testing this code on the 2.4.1 version in a different environment. Since I've updated to 2.5.1 the behavior is as expected.
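As a follow-up to the record above, a quick sanity-check sketch that the attention mask lines up with the pad positions on transformers >= 2.5; it uses the public camembert-base checkpoint, and the exact padding arguments may differ slightly between versions.

```python
from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
texts = ["je " * 25, "je suis cool"]

enc = tokenizer.batch_encode_plus(
    texts, add_special_tokens=True, max_length=8, pad_to_max_length=True
)

# Every padded position should be masked out with a 0.
for ids, mask in zip(enc["input_ids"], enc["attention_mask"]):
    expected = [0 if token_id == tokenizer.pad_token_id else 1 for token_id in ids]
    assert mask == expected, (mask, expected)
```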
transformers
3,202
closed
Update README.md
- Clarify that the model is not trained on the evaluation dataset
03-10-2020 04:11:01
03-10-2020 04:11:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=h1) Report > Merging [#3202](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ca356a464e98e065488205f3fcf9247f56c3832?src=pr&el=desc) will **increase** coverage by `<.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3202/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3202 +/- ## ========================================== + Coverage 77.96% 77.97% +<.01% ========================================== Files 98 98 Lines 16668 16668 ========================================== + Hits 12996 12997 +1 + Misses 3672 3671 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3202/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.72% <0%> (+0.15%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=footer). Last update [5ca356a...bce6ca3](https://codecov.io/gh/huggingface/transformers/pull/3202?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,201
closed
Update README.md
- Fix path of tokenizer - Clarify that the model is not trained on the evaluation set
03-10-2020 04:06:28
03-10-2020 04:06:28
transformers
3,200
closed
TF GPT2 Language model can't be created with from_pretrained() for specific shortcut name
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): TFGPT2LMHeadModel The colab notebook works for all model sizes except for gpt2-xl, where it throws an error. It looks like it can't download the correct checkpoint from the model name (gpt2-xl) I tried running the colab notebook with other gpt2-models and they all work. Stack trace: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-068b0d38bee3> in <module>() 1 strategy = tf.distribute.experimental.TPUStrategy(resolver) 2 with strategy.scope(): ----> 3 model = create_model() 4 5 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) 2 frames <ipython-input-7-f6b9ea32b94a> in create_model() 1 def create_model(): ----> 2 return TFGPT2LMHeadModel.from_pretrained('gpt2-xl') /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 401 model(model.dummy_inputs, training=False) # build the network with dummy inputs 402 --> 403 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file) 404 # 'by_name' allow us to do transfer learning by skipping/adding layers 405 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 /usr/lib/python3.6/genericpath.py in isfile(path) 28 """Test whether a path is a regular file""" 29 try: ---> 30 st = os.stat(path) 31 except OSError: 32 return False TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ``` Language I am using the model on (English, Chinese ...): English The problem arises when using: * my own modified scripts: (give details below) See colab: https://colab.research.google.com/drive/12gEGdxUjyVLBSUjkjngAWiE_ENIUIV8o The tasks I am working on is: * my own task or dataset: (give details below) Finetuning gpt2-xl on wikitext2 ## To reproduce Run the colab notebook, ## Expected behavior All gpt2 model sizes work except for gpt2-xl ## Environment info - `transformers` version: master - Platform: google colab - Tensorflow version (GPU?): 2.1 (TPU)
03-10-2020 01:35:11
03-10-2020 01:35:11
For some reason there isn't a TF pretrained checkpoint for gpt2-xl [here](https://github.com/huggingface/transformers/blob/9499a3778e1b782f03bc3b15b2ae0cbd20b6391f/src/transformers/modeling_tf_gpt2.py#L39) but there is for Pytorch [here](https://github.com/huggingface/transformers/blob/4134100363e878693aa41f4a25a667ca46d80a9e/src/transformers/modeling_gpt2.py#L35) Fixing this should only involve converting the pt checkpoint to a tf one. I'd be happy to do it myself if there is a conversion script that can convert Pytorch checkpoints to TF<|||||>Converting a pytorch checkpoint to tf works with ```python model = GPT2LMHeadModel.from_pretrained('gpt2-xl') model.save_pretrained('./') model = TFGPT2LMHeadModel.from_pretrained('./', from_pt=True) model.save_pretrained('./out') ``` If you can tell me where to upload the TF checkpoint to, I'll open up a pull request<|||||>Hi @bkkaggle thanks for pointing this out! @julien-c could you maybe help out here: while the model "gpt2-xl": "https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-xl-pytorch_model.bin" does exist in PyTorch, it does not exist for TF 2. Could we add it as well?
transformers
3,199
closed
Model card for albert-base-v2-squad2
Just creating model card for new community model!
03-09-2020 23:28:16
03-09-2020 23:28:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=h1) Report > Merging [#3199](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49debe62fdc96e161f866dd8914d5915477bb742?src=pr&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3199/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3199 +/- ## ========================================= + Coverage 77.98% 78% +0.01% ========================================= Files 98 98 Lines 16645 16645 ========================================= + Hits 12981 12984 +3 + Misses 3664 3661 -3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68% <0%> (-0.41%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3199/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.64% <0%> (+0.97%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=footer). Last update [49debe6...f53348d](https://codecov.io/gh/huggingface/transformers/pull/3199?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Thanks for sharing. [Model page](https://huggingface.co/twmkn9/albert-base-v2-squad2)
transformers
3,198
closed
XLM-R Tokenizer now passes common tests + Integration tests
The XLM-R Tokenizer had a lot of issues that were not identified because no testing was done on it. closes #2993 closes #2795 closes #2741 closes #2727 closes #2508 This fixes all the above issues, and works for all official checkpoints as well as other SPM files. However, there are a few things I dislike about the way things stand, which I'm detailing in the comments below.
03-09-2020 22:43:53
03-09-2020 22:43:53
This solved a CUDA runtime error for me. Strange! Thanks for this PR!
transformers
3,197
closed
[model upload] Support for organizations
03-09-2020 21:30:24
03-09-2020 21:30:24
transformers
3,196
closed
Create README.md
03-09-2020 20:22:56
03-09-2020 20:22:56
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=h1) Report > Merging [#3196](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3aca02efb3d4ff2d6d231c55d3b9367e61b7c0c4?src=pr&el=desc) will **decrease** coverage by `0.97%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3196/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3196 +/- ## ========================================== - Coverage 77.98% 77.01% -0.98% ========================================== Files 98 98 Lines 16660 16660 ========================================== - Hits 12993 12831 -162 - Misses 3667 3829 +162 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `75.84% <0%> (+0.21%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3196/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.56% <0%> (+0.31%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=footer). Last update [3aca02e...99b8533](https://codecov.io/gh/huggingface/transformers/pull/3196?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,195
closed
Error reported when fine tuning on my dataset using ''run_language_modeling.py"
# 🐛 Bug ## Information Model I am using (RoBERTa): Language I am using the model on (English): The problem arises when using: * [ ] the official example scripts: (give details below) Using the script provided in run_language_modeling.py I tried to fine-tune the model on my own dataset, but it raises a "KeyError 1" when it is about to run the first iteration of the first epoch. * [ ] my own modified scripts: (give details below) I only modified the script to read in my dataset using the LineByLineTextDataset class, i.e. to read my sentences, which are stored in an Excel file, and convert them to a list as in that class. Please advise if there is a proper format my dataset should be in before fine-tuning on it. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) Fine-tuning the language model of BERT on my own data. ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): yes - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes
03-09-2020 19:55:22
03-09-2020 19:55:22
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
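For anyone hitting the same problem as the record above: `LineByLineTextDataset` expects a plain-text file with one example per line, not an Excel sheet. A hedged sketch of the conversion step follows; the column name "sentence" and the file names are assumptions for illustration.

```python
import pandas as pd

# Dump an Excel column of sentences into one-example-per-line text files,
# which is the format the language-modeling example reads.
df = pd.read_excel("my_sentences.xlsx")                    # assumed input file
sentences = df["sentence"].dropna().astype(str).tolist()   # assumed column name

split = int(0.9 * len(sentences))
with open("train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sentences[:split]))
with open("eval.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(sentences[split:]))
```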
transformers
3,194
closed
[fix] Bart CNN Example: model.to(device)
03-09-2020 18:19:05
03-09-2020 18:19:05
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=h1) Report > Merging [#3194](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5164ea91a7b4d35cb03867233527fa383a651775?src=pr&el=desc) will **decrease** coverage by `1.09%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3194/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3194 +/- ## ======================================== - Coverage 78.09% 77% -1.1% ======================================== Files 98 98 Lines 16660 16660 ======================================== - Hits 13011 12829 -182 - Misses 3649 3831 +182 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0%> (-81.21%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0%> (-10%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.67% <0%> (-3.14%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0%> (-2.3%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96% <0%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.2% <0%> (-1.35%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3194/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `94.4% <0%> (-0.16%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=footer). Last update [5164ea9...f3272af](https://codecov.io/gh/huggingface/transformers/pull/3194?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,193
closed
Where is the default download location for pre-trained weights?
# ❓ Questions & Help ``` from transformers import DistilBertTokenizer, DistilBertModel tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') model = DistilBertModel.from_pretrained('distilbert-base-uncased') ``` I can't find the downloaded file. Thanks for your help
03-09-2020 17:35:47
03-09-2020 17:35:47
It's in your torch home: ```py >>> from torch.hub import _get_torch_home >>> _get_torch_home() '/home/<USER>/.cache/torch' ```
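Building on the answer above, a sketch for locating (or overriding) the default cache; the exact sub-folder layout can vary between library versions, so treat the path handling here as an assumption rather than a guarantee.

```python
import os
from torch.hub import _get_torch_home
from transformers import DistilBertModel

# The downloads typically land in a "transformers" folder under the torch home.
default_cache = os.path.join(_get_torch_home(), "transformers")
if os.path.isdir(default_cache):
    print(default_cache, os.listdir(default_cache)[:5])

# Or sidestep the shared cache entirely with an explicit directory.
model = DistilBertModel.from_pretrained("distilbert-base-uncased", cache_dir="./hf_cache")
```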