repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 3,792 | closed | Using run_glue.py on external datasets for fine-tuning a RoBERTa classification model --> Is this possible? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I recently uploaded the model weights for RoBERTa trained on a chemical benchmark dataset called ZINC15K for masked-language modelling of individual atoms in each molecule. The model performed pretty decently, so I thought it would be interesting to apply it to a downstream task of toxicity prediction on the Tox21 dataset (balanced dataset I created here: https://github.com/seyonechithrananda/bert-loves-chemistry/blob/master/tox21_balanced_revised.csv)
^The link above is in the repo containing all the HuggingFace notebooks I have created relating to this task; see there for more details.
As you can see in the CSV above, the 'SR-p53' value represents the labels for the dataset, whereas the 'SMILES' column represents the text representation for each molecule. Is there a way that `run_glue.py` can be repurposed alongside `RobertaForSequenceClassification` to train a pre-trained model for this classification task? And if so, could anyone give me a couple of pointers on where to start? I'm relatively new to HuggingFace (more familiar with CV + graph NNs for chemistry) but have been enjoying using it so far!
Link to model weights (in Huggingface hub): https://huggingface.co/seyonec/ChemBERTa-zinc-base-v1
Thanks for the help!
*Given that this was a more library-centric question and not a bug, I felt it would be better to post here than on SO.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 04-14-2020 17:16:02 | 04-14-2020 17:16:02 | Hi @seyonechithrananda, do you have a `torch.data.Dataset` for your classification dataset?
If you do, this will be pretty easy following #3800 (i.e. the amount of code required should be pretty minimal)<|||||>Thanks for the response @julien-c! I originally grabbed a list of SMILES sequences and their corresponding labels into two separate lists, before tokenizing the sequences and converting them into a tensor (following the `RobertaForSequenceClassification` docs). Will look into `torch.data.Dataset`.
Link to code I was using previously (I originally tried to fine-tune for classification without `run_glue.py`): https://t.co/lqVqh3L1oA?amp=1
> Hi @seyonechithrananda, do you have a `torch.data.Dataset` for your classification dataset?
>
> If you do, this will be pretty easy following #3800 (i.e. the amount of code required should be pretty minimal)
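For reference, a minimal sketch (not the author's code) of what such a `torch.utils.data.Dataset` for the SMILES/label pairs could look like; the column names, tokenizer checkpoint and `max_length` below are assumptions rather than details taken from the notebook:
```python
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import RobertaTokenizer

class SmilesClassificationDataset(Dataset):
    """Wraps a CSV with a SMILES text column and an 'SR-p53' label column (assumed names)."""

    def __init__(self, csv_path, tokenizer, max_length=128):
        df = pd.read_csv(csv_path)
        self.texts = df["smiles"].tolist()    # assumed column name
        self.labels = df["SR-p53"].tolist()   # assumed column name
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Tokenize one molecule string and pad it to a fixed length
        encoding = self.tokenizer.encode_plus(
            self.texts[idx],
            max_length=self.max_length,
            pad_to_max_length=True,
            return_tensors="pt",
        )
        item = {key: value.squeeze(0) for key, value in encoding.items()}
        item["labels"] = torch.tensor(self.labels[idx], dtype=torch.long)
        return item

tokenizer = RobertaTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
dataset = SmilesClassificationDataset("tox21_balanced_revised.csv", tokenizer)
```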
<|||||>Hi @julien-c! Followed your tips and created a Dataset class following a similar tutorial. However I ran into an issue with CUDA in the training pipeline:
```
AttributeError Traceback (most recent call last)
<ipython-input-15-ea2a288fbd03> in <module>()
8 if torch.cuda.is_available():
9 sent = sent.cuda()
---> 10 label = labels.cuda()
11 output = model.forward(sent)[0]
12 _, predicted = torch.max(output, 1)
AttributeError: 'list' object has no attribute 'cuda'
```
Do you know why this issue is occurring? I use a pre-trained RoBERTA model (trained on MLM for a diff. dataset). [Here](https://colab.research.google.com/drive/1Q9pvFQoEe_4NIO853-tDy0ERZ3pyNzwT) is the notebook. Also, can we utilize the config from a MLM RoBERTa model for sequence classification or should it be the `roberta-base` config?
Thanks for the help!
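For reference, the traceback above comes from calling `.cuda()` on a plain Python list; a minimal sketch of a fix, assuming `labels` holds integer class ids:
```python
import torch

# In the notebook, `labels` is a plain Python list, which has no .cuda() method.
# Converting it to a tensor first avoids the AttributeError.
labels = [0, 1, 1, 0]  # placeholder batch of integer class ids
labels = torch.tensor(labels, dtype=torch.long)
if torch.cuda.is_available():
    labels = labels.cuda()
```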
<|||||>Managed to create a variant of `run_glue.py` and get it working. Thanks for the help! |
transformers | 3,791 | closed | XLM tokenizer should encode with bos token | XLM tokenizer should behave according to the documentation
closes https://github.com/huggingface/transformers/issues/3788 | 04-14-2020 16:10:51 | 04-14-2020 16:10:51 | |
transformers | 3,790 | closed | Fix token_type_id in BERT question-answering example | `token_type_id` wasn't being set correctly in the code examples for BERT question answering. It is used to build the segment (token type) embedding, hence it needs to indicate whether each token belongs to sequence 0 or 1.
For this small example the model returns the correct answer even though the parameter was incorrectly set, but for bigger paragraphs that is not the case.
I changed the code to use encode_plus which returns the correct `token_type_id`. | 04-14-2020 15:12:58 | 04-14-2020 15:12:58 | |
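A minimal sketch of the pattern this PR describes; the question/context strings and checkpoint below are placeholders, not the PR's own example:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
question = "Who was Jim Henson?"
context = "Jim Henson was a nice puppet."

# encode_plus builds token_type_ids marking question tokens as 0 and context tokens as 1
encoding = tokenizer.encode_plus(question, context)
print(encoding["input_ids"])
print(encoding["token_type_ids"])
```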
transformers | 3,789 | closed | Is there a classical transformer model in the project? | Hi,
I am studying in a domain that needs the original Transformer from "Attention Is All You Need". Is there an implementation? | 04-14-2020 13:44:31 | 04-14-2020 13:44:31 | You might find this useful http://nlp.seas.harvard.edu/2018/04/03/attention.html
I think PyTorch already has it implemented in their library
https://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer<|||||>thank you, that answers it for me. |
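Following up on the pointer above, a small usage sketch of PyTorch's built-in `torch.nn.Transformer`; the sizes are illustrative and not tied to any pretrained model:
```python
import torch
import torch.nn as nn

# Encoder-decoder Transformer as in "Attention Is All You Need" (6 encoder + 6 decoder layers)
model = nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # (source length, batch size, d_model)
tgt = torch.rand(20, 32, 512)  # (target length, batch size, d_model)
out = model(src, tgt)          # (target length, batch size, d_model)
```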
transformers | 3,788 | closed | Inconsistencies and possible bugs in different tokenizers | # 🐛 Bug
## Information
Over in [Flair](https://github.com/flairNLP/flair/pull/1494) we are integrating your awesome library in our embeddings interfaces. We are using the `AutoTokenizer` class to create one interface for all embeddings. We use the tokenizers to encode strings with special tokens.
However, we note some inconsistencies: (1) Most, but not all encodings do not include the BOS token and (2) some encodings behave differently depending on how the tokenizer is called. In both cases, this is detrimental to downstream task performance.
This can be reproduced with the following script:
```python
from transformers import AutoTokenizer
# example string to tokenize
text = "CRICKET1 MATCH"
# different models
for tokenizer_name in [
'bert-base-cased',
'openai-gpt',
'transfo-xl-wt103',
'gpt2',
'xlnet-base-cased',
'xlm-mlm-ende-1024',
'roberta-base',
'distilbert-base-uncased',
'ctrl',
'camembert-base',
'albert-base-v2',
'xlm-roberta-base',
'distilgpt2',
'bart-large',
'distilroberta-base',
]:
# for each tokenizer model, print name and result of checks
print('------------')
print(tokenizer_name)
print('------------')
# get tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# method 1: tokenizer.encode() with add_special_tokens=True
ids = tokenizer.encode(text, add_special_tokens=True)
subtokens_encode_special = tokenizer.convert_ids_to_tokens(ids)
# method 2: tokenizer.encode() with add_special_tokens=False and subsequent build_inputs_with_special_tokens()
ids = tokenizer.encode(text, add_special_tokens=False)
ids_extended = tokenizer.build_inputs_with_special_tokens(ids)
subtokens_encode_and_build = tokenizer.convert_ids_to_tokens(ids_extended)
# check if both methods yield the same result
if subtokens_encode_special != subtokens_encode_and_build:
print("DIFFERENCE IN ENCODING!")
print(f'Method 1 - Encode (+ special):\t{str(subtokens_encode_special)}')
print(f'Method 2 - Encode and build: \t{str(subtokens_encode_and_build)}')
# check if the BOS token is included
bos_token = tokenizer.bos_token
if bos_token and bos_token not in subtokens_encode_and_build:
print("DOES NOT CONTAIN BOS TOKEN!")
print(f"BOS token '{bos_token}' not in {str(subtokens_encode_and_build)}")
```
This outputs the following inconsistencies, at least some of which likely are bugs.
There are two encodings that do not contain the BOS token:
```console
------------
xlm-mlm-ende-1024
------------
DOES NOT CONTAIN BOS TOKEN!
BOS token '<s>' not in ['</s>', 'crick', 'et', '1</w>', 'match</w>', '</s>']
```
So, the XLM encoding of the string "CRICKET1 MATCH" strangely starts with a **`</s>`** (EOS) even though it should probably start with a **`<s>`**.
```console
------------
xlnet-base-cased
------------
DOES NOT CONTAIN BOS TOKEN!
BOS token '<s>' not in ['▁CR', 'ICK', 'ET', '1', '▁M', 'ATCH', '<sep>', '<cls>']
```
XLNet encoding does not contain BOS and EOS at all. This is consistent with the documentation but is detrimental to performance. In our experiments, it works a lot better if we include `<s>` and `</s>` in the sequence.
There are also two tokenizers for which the two methods (encode with special tokens and encode and build) give slightly different results, namely RoBERTa and BART:
```console
------------
roberta-base
------------
DIFFERENCE IN ENCODING!
Method 1 - Encode (+ special): ['<s>', 'ĠCR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
Method 2 - Encode and build: ['<s>', 'CR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
```
This was already noted by @stefan-it in #1196 and strangely, even though the tokenization output by method 1 seems to make more sense, method 2 gives better results.
## Expected behavior
Consistent output :)
## Environment info
- `transformers` version: 2.8
- Platform: Ubuntu
- Python version: 3.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: o
| 04-14-2020 12:50:29 | 04-14-2020 12:50:29 | Hi, and thanks for your report! Indeed some of these seem to be bugs.
## XLM
This seems to be a bug. It doesn't behave as the documentation says, I'm looking into it. Encoding sequences with the bos token instead of the cls token should do the trick.
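As an illustration of the intended `<s> ... </s>` format, one could prepend the bos token by hand until the tokenizer is fixed (a sketch, reusing the example string from this issue):
```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-ende-1024")
ids = tokenizer.encode("CRICKET1 MATCH", add_special_tokens=False)

# Build <s> tokens </s> manually instead of relying on add_special_tokens
ids_with_bos = [tokenizer.bos_token_id] + ids + [tokenizer.sep_token_id]
print(tokenizer.convert_ids_to_tokens(ids_with_bos))
# expected: ['<s>', 'crick', 'et', '1</w>', 'match</w>', '</s>']
```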
## XLNet
The way XLNet encode sequences is with the format `A <sep> B <sep> <cls>`, as it can be seen in the [original repository](https://github.com/zihangdai/xlnet/blob/0b642d14dd8aec7f1e1ecbf7d6942d5faa6be1f0/data_utils.py#L481-L487). I'm not finding any usage of the `<s>` and `</s>` tokens in the repo, even though they're declared.
It's interesting that you obtained better results when using `<s>` and `</s>`!
## RoBERTa and BART
This is a slightly more complicated issue. The #1196 issue describes it well, and the https://github.com/huggingface/transformers/pull/2778 PR addresses this as well. Here the correct tokenization is the first one:
```
['<s>', 'ĠCR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
```
It is interesting that you're getting better results with the second one. I believe the original implementation outputs the same results as the result pasted above. I'm guessing you obtained the second result with the following:
```py
tokenizer.build_inputs_with_special_tokens(
tokenizer.encode('She likes <mask> cats.', add_special_tokens=False)
)
```
which indeed yields the tokenization you mentioned. This happens because when encoding without special tokens, no space is added between the initial special token and the first sequence token (seeing as there's not special tokens). When using this method, you would need to specify you want that prefix space so that it adds it. You can do so with the `add_prefix_space` option for the `encode` method:
```py
tokenizer.build_inputs_with_special_tokens(
tokenizer.encode('She likes <mask> cats.', add_special_tokens=False, add_prefix_space=True)
)
```
This yields the same results as the first method. Let me know if I can be of further help.
<|||||>Thanks, that clarifies it.
You're right that the XLNet implementation declares the ` <s>` and `</s>` but then does not seem to use them, which is strange. Also strange that we are seeing better results with these tags but this could also be a problem in our code. Perhaps you could then set the `tokenizer.bos_token` and `tokenizer.eos_token` fields to `None` for the `XLNetTokenizer` if they are not used? |
transformers | 3,787 | closed | In just the fourth block of code of the colab notebook "01-training notebook", it just failed. | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
https://colab.research.google.com/drive/1gamxcO5AHioIHFVhTr1x71mn8SYPNkPz
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-559dbb5a1852> in <module>()
7
8 # First we create an empty Byte-Pair Encoding model (i.e. not trained model)
----> 9 tokenizer = Tokenizer(BPE())
10
11 # Then we enable lower-casing and unicode-normalization
TypeError: cannot create 'BPE' instances
```
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-14-2020 11:39:40 | 04-14-2020 11:39:40 | it seems to be that it can be solved by without installing it from pip<|||||>> it seems to be that it can be solved by without installing it from pip
could you please explain how you solved the issue? I did not understand.
thanks,<|||||>@uunal, do you have this issue too?<|||||>@JonathanSum yes, can not find a solution. Updated all packages, check dependency but still same error persists.<|||||>@uunal see the link above? If you don't want to waste your time, please feel free to use it, and it is the 01 training notebook. It has the solution.
The solution applied: "it seems to be that it can be solved by without installing it from pip"<|||||>Ooh I get it now, pip version has a problem:) thanks<|||||>Anyways I found why it is not working with pip version, check out this commit:https://github.com/huggingface/transformers/commit/b7cf9f43d259fbad45d899c1769110aafc9f410a |
transformers | 3,786 | closed | Why force tokens in Bart decoding | https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L955
What's the meaning of this line? Why decode \<S\> when cur_len = 1? Why not when cur_len = 0? | 04-14-2020 07:05:03 | 04-14-2020 07:05:03 | see https://github.com/huggingface/transformers/issues/3668<|||||>> see #3668
thanks a lot! |
transformers | 3,785 | closed | How to fine tune EncoderDecoder model for training a new corpus of data ? | is there any documentation available for the same? | 04-14-2020 05:02:22 | 04-14-2020 05:02:22 | We are currently working on implementing the encoder decoder framework. See PR: https://github.com/huggingface/transformers/pull/3383<|||||>I think in a week it should be ready :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>See https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework<|||||>Thank u Patrick
|
transformers | 3,784 | closed | Convert pytorch-pretrained-bert to new version (transformers) | So I have this code for albert:
```py
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```
How do i convert it to old format:
```py
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = BertForMaskedLM.from_pretrained('bert-large-cased')
tokenized_text_tmp = tokenizer.tokenize(text)
indexed_tokens_tmp = tokenizer.convert_tokens_to_ids(tokenized_text_tmp)
predictions = model(tokens_tensors, segments_tensors, attention_mask_tensors)
```
How do I get the functions tokenizer.convert_tokens_to_ids and tokenizer.tokenize from the old version into the new one?
I need to first tokenize the text and only afterwards convert it to ids, because that is how my old code works; changing to the new format would take a very long time, since I did a lot of speed optimization and padding handling.
| 04-14-2020 03:30:08 | 04-14-2020 03:30:08 | Have you tried using the exact same methods `tokenize` and `convert_tokens_to_ids`?<|||||>```py
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
text = "[CLS] Who was Jim Henson ? [SEP]"
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0 for x in range(0,len(tokenized_text))]
tokens_tensor = torch.tensor([indexed_tokens])
tokens_tensor = tokens_tensor.to('cuda')
with torch.no_grad():
    predictions_0 = model(tokens_tensor)
print(tokenized_text)
print(predictions_0)
del predictions_0
```<|||||>I cannot copy but predictions_0 does not contain 2 elements, but just one.
So `loss, prediction_scores = outputs[:2]` gives me an error (index out of range),
while `prediction_scores = outputs[0]` works, but I do not know what that output is; I hope it is the logits.<|||||>I had to format your first comment as it was unreadable, please [see how to use code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).
In your first snippet you're using:
```py
outputs = model(input_ids, masked_lm_labels=input_ids)
```
while in the second snippet:
```py
predictions_0 = model(tokens_tensor)
```
You're not sending the `masked_lm_labels`, which is what is used to compute the loss. If you were to use these labels, the loss would be computed, resulting in a tuple with 2 elements as an output.
Here's the [documentation](https://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertForMaskedLM) for the `AlbertForMaskedLM` model.
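A short self-contained sketch of the point above (the input text is a placeholder): passing `masked_lm_labels` makes `AlbertForMaskedLM` return a `(loss, prediction_scores)` tuple.
```python
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertForMaskedLM.from_pretrained("albert-base-v2")

input_ids = torch.tensor([tokenizer.encode("Who was Jim Henson?", add_special_tokens=True)])
with torch.no_grad():
    # With masked_lm_labels the first output is the loss, the second the logits
    loss, prediction_scores = model(input_ids, masked_lm_labels=input_ids)[:2]
```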
<|||||>Thanks, |
transformers | 3,783 | closed | Longformer, a scalable transformer model for long-document NLP tasks | # 🌟 New model addition
## Model description
This is an incredible project from the awesome https://github.com/allenai team that solves a big problem in transformers.
From https://twitter.com/i_beltagy/status/1249750021811011591
Excited to share our work on Longformer, a scalable transformer model for long-document NLP tasks without chunking/truncation to fit the 512 limit.
Work with @mattthemathman
, @armancohan
Code and pretrained model: http://github.com/allenai/longformer
We replace the standard self-attention with one that scales linearly with sequence length and that can flexibly adapt to downstream tasks. We continue pretraining from the RoBERTa checkpoint and evaluate on QA, coref, classification. Pretrained model supports seqlen 4,096
The small model achieves SOTA results on enwik8 and text8, and the large model gets close with half the parameters. Longformer's self-attention uses an efficient CUDA kernel that minimizes memory usage (char-lm large model, 23k tokens at training and 32k tokens at evaluation)
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/allenai/longformer
* [X] the model weights are available: (give details)
Yes, at https://github.com/allenai/longformer
* [ X] who are the authors: (mention them, if possible by @gh-username)
@ibeltagy @schmmd
| 04-14-2020 00:35:38 | 04-14-2020 00:35:38 | Any updates on this? Just curious.<|||||>Reformer will be added next week and then work will start on Longformer :-) <|||||>Look forward to it!<|||||>Longformer is added now - closing!<|||||>@patrickvonplaten I have been using `Longformer` self attention with `LongBart` for summarisation recently and have done some side-by-side comparison to hf `BartForConditionalGeneration`. I noticed that `LongBart` is actually using more memory than hf `BartForConditionalGeneration` (when they're set up the equivalently). I looked into this and have found that this is coming from the self attention layer, i.e. `Longformer` self attention is using more memory than the normal multi-head self attention in `BartForConditionalGeneration`.
Wondering if this is expected or a bug? If it's expected, could you please explain? I thought the point of `Longformer` self attention was to reduce memory consumption...<|||||>It depends very much on the sequence length of your input. Did you benchmark your results using the benchmarking utils? <|||||>@alexgaskell10, what is the sequence length? If the sequence length is shorter than the window size (for LongBart, it is probably 1024), you will see a bit of an increase in memory. For sequences longer than the window size (say, 2048), `LongformerSelfAttention` should be much more memory efficient compared to regular selfattention.
<|||||>Thanks to both for the quick responses. I have only tried with input lengths <= 1024 but nothing beyond that. Makes sense that the benefits of `Longformer` self attention are more evident as sequences get longer, thanks.
@patrickvonplaten no I didn't know there was a script for this already, I just used something I wrote. I'll have a look at this.
@ibeltagy the sequence length I have set equal to window size (and tried for several different values, all <= 1024). I thought that if I used a sequence length of 1024 and window size of 1024 then `Longformer` and multi-head self attention layers would be equivalent (thereby making `LongBart` and `BartForConditionalGeneration` equivalent). Is there some overhead to using `Longformer` self attention which means it is more costly for sequences <= 1024?<|||||>> equivalent
they are not perfectly equivalent but close
> which means it is more costly for sequences <= 1024?
yes, the current implementation has a bit of overhead with sequences shorter than the window length. We are planning to address that in the future. One way to do so is to switch to regular selfattention if the sequence is short, but this probably requires additional pretraining to teach the model to work with both types of selfattention. <|||||>Great, all makes sense. I'll run benchmarking for longer sequences and flag if anything unusual shows up. Thanks! |
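A sketch of how peak memory could be compared across sequence lengths with the library's benchmarking utilities; the model name and lengths are illustrative, and the exact argument names may differ between versions:
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["allenai/longformer-base-4096"],
    batch_sizes=[1],
    sequence_lengths=[512, 1024, 2048, 4096],
)
results = PyTorchBenchmark(args).run()
```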
transformers | 3,782 | closed | Importing horovod.tensorflow crashes AlbertTokenizer but not BertTokenizer | # 🐛 Bug
## Information
Albert tokenizers began crashing after I reordered my import statements with `isort`. Tracked down the bug to very strange behavior: importing `horovod.tensorflow` before `AlbertTokenizer` causes a crash, while importing `AlbertTokenizer` first does not. This behavior does not occur with `BertTokenizer`, only with `AlbertTokenizer`.
## To reproduce
Steps to reproduce the behavior:
```bash
docker run -it nvcr.io/nvidia/tensorflow:20.03-tf2-py3 /bin/bash # TF 2.1, horovod 0.19.0
pip install transformers==2.8.0
```
```python
import horovod.tensorflow as hvd
from transformers import AlbertTokenizer, BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print("BERT success!") # this succeeds
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print("ALBERT success!") # this causes a CoreDump
```
outputs
```error
BERT success!
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/stubs/common.cc:86] This program was compiled against version 3.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.8.0). Contact the program author
for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/sentencepiece/src/builtin_pb/sentencepiece_model.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 3.6.1 of the Protocol Buffer runtime library, which is not compatible with the
installed version (3.8.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/sentencepiece/src/builtin_pb/sentencepiece_model.pb.cc".)
Aborted (core dumped)
```
However, the code below succeeds. The only difference is that the transformers import comes first:
```python
from transformers import AlbertTokenizer, BertTokenizer
import horovod.tensorflow as hvd
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print("BERT success!") # this succeeds
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print("ALBERT success!") # this succeeds
```
This bug is a bit bewildering, to be honest. I can stop sorting my imports, I guess... Hoping that someone can identify the root cause.
| 04-13-2020 23:22:42 | 04-13-2020 23:22:42 | I think I remember someone mentioning this before. @LysandreJik does it ring any bell?<|||||>The error at the time was due to https://github.com/scipy/scipy/issues/11237. I'll look into it and try to reproduce @jarednielsen.<|||||>The `AlbertTokenizer` is using `SentencePiece` which is based on protobuffs. This seems to be the error, which would point to an error with `SentencePiece` rather than with `AlbertTokenizer`. Would you mind trying to import `XLNetTokenizer`, which is also based on `SentencePiece` and show us the results?<|||||>Same issue occurs with `XLNetTokenizer`. Would resolving https://github.com/huggingface/tokenizers/issues/53 enable us to move away from protobuf and fix this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Don't think this is stale; still waiting on a fix in the tokenizers repo.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Not stale<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,781 | closed | Create model card | 04-13-2020 23:00:13 | 04-13-2020 23:00:13 | ||
transformers | 3,780 | closed | language modeling other models | 04-13-2020 22:44:20 | 04-13-2020 22:44:20 | ||
transformers | 3,779 | closed | Problem when Converting a Fine-tuned Checkpoint from TF to PyTorch using ALBERTxxlargev1 Model | # 🐛 Bug
## Information
Model I am using : ALBERTxxlargeV1
Language I am using the model on : English
The problem arises when using: Converting fine-tuned checkpoint from TF to PyTorch. No Problem with converting pre-trained checkpoints from TF.
* [ ] the official example scripts:
```
!python /content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path /content/pretrained_models/albertsquad/model.ckpt-best \
--albert_config_file /content/pretrained_models/albertsquad/config.json \
--pytorch_dump_path /content/pretrained_models/albertsquad/pytorch_model.bin
```
My vocabulary model was also placed on the same folder with the name "spiece.model" along with model.ckpt-best.index and model.ckpt-best.meta
I think the problem resides here
https://github.com/huggingface/transformers/blob/352d5472b0c1dec0f420d606d16747d851b4bda8/src/transformers/modeling_albert.py#L120
and here
https://github.com/huggingface/transformers/blob/352d5472b0c1dec0f420d606d16747d851b4bda8/src/transformers/modeling_albert.py#L160
or to replace names in the structure of TF in lines around 70 in modeling_albert.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: SQUAD
* [ ] my own task or dataset: not related
## To reproduce
Steps to reproduce the behavior:
1. Pre-train ALBERTxx large model using v1 configuration on TF and then fine-tune it on GLUE or SQUAD Task using TF, not PyTorch.
2. Copy TF checkpoint on a folder along with the sentence piece model as "spiece.model" and config file as "config.json"
3. Try to convert TF checkpoint to PyTorch and you will have this message
```
2020-04-13 21:26:33.470832: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Building PyTorch model from configuration: AlbertConfig {
"_num_labels": 2,
"architectures": null,
"attention_probs_dropout_prob": 0,
"bad_words_ids": null,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"decoder_start_token_id": null,
"do_sample": false,
"down_scale_factor": 1,
"early_stopping": false,
"embedding_size": 128,
"eos_token_id": 3,
"finetuning_task": null,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 4096,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.01,
"inner_group_num": 1,
"intermediate_size": 16384,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"layers_to_keep": [],
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "albert",
"net_structure_type": 0,
"no_repeat_ngram_size": 0,
"num_attention_heads": 64,
"num_beams": 1,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000,
"xla_device": null
}
INFO:transformers.modeling_albert:Converting TensorFlow checkpoint from /content/pretrained_models/albertCOVIDglue/model.ckpt-best
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight global_step with shape []
INFO:transformers.modeling_albert:Loading TF weight output_bias with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_m with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_v with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_weights with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_m with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_v with shape [3, 4096]
bert/embeddings/LayerNorm/beta
bert/embeddings/LayerNorm/beta/adam_m
bert/embeddings/LayerNorm/beta/adam_v
bert/embeddings/LayerNorm/gamma
bert/embeddings/LayerNorm/gamma/adam_m
bert/embeddings/LayerNorm/gamma/adam_v
bert/embeddings/position_embeddings
bert/embeddings/position_embeddings/adam_m
bert/embeddings/position_embeddings/adam_v
bert/embeddings/token_type_embeddings
bert/embeddings/token_type_embeddings/adam_m
bert/embeddings/token_type_embeddings/adam_v
bert/embeddings/word_embeddings
bert/embeddings/word_embeddings/adam_m
bert/embeddings/word_embeddings/adam_v
bert/encoder/embedding_hidden_mapping_in/bias
bert/encoder/embedding_hidden_mapping_in/bias/adam_m
bert/encoder/embedding_hidden_mapping_in/bias/adam_v
bert/encoder/embedding_hidden_mapping_in/kernel
bert/encoder/embedding_hidden_mapping_in/kernel/adam_m
bert/encoder/embedding_hidden_mapping_in/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
global_step
output_bias
output_bias/adam_m
output_bias/adam_v
output_weights
output_weights/adam_m
output_weights/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'token_type_embeddings'] from bert/embeddings/token_type_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'word_embeddings'] from bert/embeddings/word_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'bias'] from bert/encoder/embedding_hidden_mapping_in/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'kernel'] from bert/encoder/embedding_hidden_mapping_in/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_v
INFO:transformers.modeling_albert:Skipping global_step
INFO:transformers.modeling_albert:Skipping classifier/output_bias
Traceback (most recent call last):
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.albert_config_file, args.pytorch_dump_path)
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/content/drive/My Drive/transformers/src/transformers/modeling_albert.py", line 140, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertForMaskedLM' object has no attribute 'bias'
```
I understand that, since I am using a fine-tuned model, I should use the AlbertForSequenceClassification or AlbertForQuestionAnswering class instead of AlbertForMaskedLM. I actually tried that and nothing changed. Below is the error message that I got:
```
2020-04-13 21:29:01.166679: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Building PyTorch model from configuration: AlbertConfig {
"_num_labels": 2,
"architectures": null,
"attention_probs_dropout_prob": 0,
"bad_words_ids": null,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"decoder_start_token_id": null,
"do_sample": false,
"down_scale_factor": 1,
"early_stopping": false,
"embedding_size": 128,
"eos_token_id": 3,
"finetuning_task": null,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 4096,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.01,
"inner_group_num": 1,
"intermediate_size": 16384,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"layers_to_keep": [],
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "albert",
"net_structure_type": 0,
"no_repeat_ngram_size": 0,
"num_attention_heads": 64,
"num_beams": 1,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000,
"xla_device": null
}
INFO:transformers.modeling_albert:Converting TensorFlow checkpoint from /content/pretrained_models/albertCOVIDglue/model.ckpt-best
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight global_step with shape []
INFO:transformers.modeling_albert:Loading TF weight output_bias with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_m with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_v with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_weights with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_m with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_v with shape [3, 4096]
bert/embeddings/LayerNorm/beta
bert/embeddings/LayerNorm/beta/adam_m
bert/embeddings/LayerNorm/beta/adam_v
bert/embeddings/LayerNorm/gamma
bert/embeddings/LayerNorm/gamma/adam_m
bert/embeddings/LayerNorm/gamma/adam_v
bert/embeddings/position_embeddings
bert/embeddings/position_embeddings/adam_m
bert/embeddings/position_embeddings/adam_v
bert/embeddings/token_type_embeddings
bert/embeddings/token_type_embeddings/adam_m
bert/embeddings/token_type_embeddings/adam_v
bert/embeddings/word_embeddings
bert/embeddings/word_embeddings/adam_m
bert/embeddings/word_embeddings/adam_v
bert/encoder/embedding_hidden_mapping_in/bias
bert/encoder/embedding_hidden_mapping_in/bias/adam_m
bert/encoder/embedding_hidden_mapping_in/bias/adam_v
bert/encoder/embedding_hidden_mapping_in/kernel
bert/encoder/embedding_hidden_mapping_in/kernel/adam_m
bert/encoder/embedding_hidden_mapping_in/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
global_step
output_bias
output_bias/adam_m
output_bias/adam_v
output_weights
output_weights/adam_m
output_weights/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'token_type_embeddings'] from bert/embeddings/token_type_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'word_embeddings'] from bert/embeddings/word_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'bias'] from bert/encoder/embedding_hidden_mapping_in/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'kernel'] from bert/encoder/embedding_hidden_mapping_in/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_v
INFO:transformers.modeling_albert:Skipping global_step
INFO:transformers.modeling_albert:Skipping classifier/output_bias
Traceback (most recent call last):
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.albert_config_file, args.pytorch_dump_path)
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/content/drive/My Drive/transformers/src/transformers/modeling_albert.py", line 140, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertForQuestionAnswering' object has no attribute 'bias'
```
## Expected behavior
This behavior only happens with a model fine-tuned on SQuAD or GLUE. I have managed to convert ALBERT TF checkpoints that were not fine-tuned, and they work fine. However, if I fine-tune my model using TF on SQuAD, I can no longer convert the checkpoint.
## Environment info
Google Colab
- `transformers` version: latest
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
This problem has not been fixed for a long time. Please have a look at this post:
https://github.com/huggingface/transformers/issues/2006 | 04-13-2020 21:40:39 | 04-13-2020 21:40:39 | Same issue here. I still have this error. Could you post an answer here?<|||||>Our hero LysandreJik assigned this problem to himself. Let's have confidence in him to solve it (:<|||||>You fine-tuned your TF checkpoint using the original implementation, is that correct?<|||||>Thank you, but I found a way to resolve my problem: I fine-tune ALBERT with the pretrained weights loaded from the TF checkpoint, then just convert it to a .bin model and load it with the Hugging Face abstract class. Done!<|||||>Yes, I fine-tuned it using python3 albert/run_squad_v2.py with the Adam optimizer. Then I tried to convert the SQuAD checkpoint using the Hugging Face transformers library. I would appreciate your help, because I have been waiting for two weeks for this problem to be solved.<|||||>> Thank you, but I found a way to resolve my problem: I fine-tune ALBERT with the pretrained weights loaded from the TF checkpoint, then just convert it to a .bin model and load it with the Hugging Face abstract class. Done!
Is this checkpoint fine-tuned on SQuAD? Because I have no problem converting an ALBERT checkpoint that was not fine-tuned on downstream tasks.<|||||>You can try the function in this repo to convert a TF checkpoint to a PyTorch .bin model: https://github.com/lonePatient/albert_pytorch.git<|||||>This feature is not currently supported by our conversion scripts. I can take a look later this week, or you can try modifying the code yourself:
- Change from `AlbertForPreTraining` to `AlbertForQuestionAnswering` in the [conversion file](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
- Rename the weights in your model to those of our `AlbertForQuestionAnswering` by replacing the layers like it is done in [this method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L79-L106). Something like `name = name.replace("classifier", "qa_outputs")` would probably work.
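A rough sketch of what those two steps could look like (illustrative only, and full of assumptions: the exact TF variable names of the fine-tuned head, e.g. `output_weights`/`output_bias`, would still have to be mapped by hand inside `load_tf_weights_in_albert`):
```python
# Hypothetical adaptation of convert_albert_original_tf_checkpoint_to_pytorch.py
# for a checkpoint fine-tuned with a QA head. Not an officially supported script.
import torch
from transformers import AlbertConfig, AlbertForQuestionAnswering, load_tf_weights_in_albert


def convert_finetuned_checkpoint(tf_checkpoint_path, albert_config_file, pytorch_dump_path):
    config = AlbertConfig.from_json_file(albert_config_file)
    model = AlbertForQuestionAnswering(config)  # step 1: task head instead of AlbertForPreTraining
    # Step 2 would happen inside load_tf_weights_in_albert: its renaming loop needs an extra
    # rule along the lines of name.replace("classifier", "qa_outputs") so that the fine-tuned
    # head variables land on the right PyTorch parameters (an assumption, not verified here).
    load_tf_weights_in_albert(model, config, tf_checkpoint_path)
    torch.save(model.state_dict(), pytorch_dump_path)
```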
Please note that this would work in the case where the ALBERT official implementation has the same Question Answering model as we do (that is, a single linear layer on top of the transformer). If there isn't, you would need to create a model similar to `AlbertForQuestionAnswering` but with the correct head.<|||||>> This feature is not currently supported by our conversion scripts. I can take a look later this week, or you can try modifying the code yourself:
>
> * Change from `AlbertForPreTraining` to `AlbertForQuestionAnswering` in the [conversion file](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).
> * Rename the weights in your model to those of our `AlbertForQuestionAnswering` by replacing the layers like it is done in [this method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L79-L106). Something like `name = name.replace("classifier", "qa_outputs")` would probably work.
>
> Please note that this would work in the case where the ALBERT official implementation has the same Question Answering model as we do (that is, a single linear layer on top of the transformer). If there isn't, you would need to create a model similar to `AlbertForQuestionAnswering` but with the correct head.
The problem still exists. A message saying "AttributeError: 'AlbertForQuestionAnswering' object has no attribute 'shape'" appears even though I did everything you said. I think it's worth you fixing it later this week. Google Colab offers a TPUv3, which has 128 GB, while Hugging Face transformers only supports GPU, and Colab's P100 only has 16 GB. That is an 8x performance boost for the TPU, so it will take me days to fine-tune it using the transformers library with only a GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,778 | closed | [Generation, EncoderDecoder] Apply Encoder Decoder 1.5GB memory savings to TF as well | As was done by @sshleifer for torch, improved the memory usage for TF Encoder Decoder models.
Straight-forward translation of PR: https://github.com/huggingface/transformers/pull/3370. | 04-13-2020 21:03:11 | 04-13-2020 21:03:11 | Tested on `RUN SLOW=1 pytest tests/test_modeling_tf_t5.py` and all tests pass. |
transformers | 3,777 | closed | [PretrainedTokenizer] Factor out tensor conversion method | `MBartTokenizer` and `MarianTokenizer` will call the new method. | 04-13-2020 18:38:16 | 04-13-2020 18:38:16 | |
transformers | 3,776 | closed | MBartTokenizer:add language codes | The mbart tokenizer is meant to
- not add bos token at the beginning
- end `input_ids` with [eos, src_lang_code]
- end `decoder_input_ids` with [eos, tgt_lang_code]
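A minimal illustration of the intended layout (the token strings and language codes below are made-up examples; the exact ordering is precisely what still needs confirming):
```python
# Hypothetical en->ro example of the special-token layout described above.
input_tokens = ["▁UN", "▁Chief", "▁Says", "</s>", "en_XX"]            # no <s> prepended; ends with eos, src_lang_code
decoder_input_tokens = ["▁Seful", "▁ONU", "▁spune", "</s>", "ro_RO"]  # ends with eos, tgt_lang_code
print(input_tokens)
print(decoder_input_tokens)
```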
I have posted a fairseq issue to confirm this, but all that will change is the ordering of special tokens. | 04-13-2020 18:04:54 | 04-13-2020 18:04:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=h1) Report
> Merging [#3776](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `90.90%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3776 +/- ##
=======================================
Coverage 77.00% 77.00%
=======================================
Files 128 128
Lines 21602 21624 +22
=======================================
+ Hits 16634 16652 +18
- Misses 4968 4972 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `94.73% <90.90%> (-5.27%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=footer). Last update [6e603cb...acbdaf3](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,775 | closed | OpusNMT/MarianMT Machine Translation Models | ### Model description
1,026 Language Pair Models, downloadable [here](http://opus.nlpl.eu/Opus-MT/)
Trained with Marian C++ [library](https://github.com/marian-nmt/marian)
### Open source status
* [ x] the model implementation is available: (give details)
* [ x] the model weights are available: (give details)
* [ x] who are the authors: TODO, find gh-usernames of authors!
### Proposed API:
```python
model_name = 'marian/en-fr'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianModel.from_pretrained(model_name)
src_text = "how are you today?"
tgt_text = "comment allez vous aujourd'hui?"
# Training API
full_inputs: dict = tokenizer.prepare_batch_for_translation(src_text, tgt_text=tgt_text)
loss, logits, *other_outputs = model(
full_inputs['input_ids'],
full_inputs['attention_mask'],
full_inputs['decoder_input_ids'], # this argument is mandatory for the forward pass
)
# Inference/generate API
src_inputs: dict = tokenizer.prepare_batch_for_translation(src_text)
generated_fr_ids = model.generate(src_inputs['input_ids'], src_inputs['attention_mask'])
french_text: List[str] = tokenizer.decode_batch(generated_fr_ids)
```
### Implementation Details
`MarianTokenizer` Signatures
(Originally proposed as `MarianProcessor` rather than `Tokenizer` to avoid confusion, but I don't feel strongly; renamed to `MarianTokenizer` for consistency, see the edits note below.)
- All models require `MosesSentenceSplitter` and `MosesPunctuationNormalizer` preprocessing
- There are some additional perl scripts we will not port for pre/post-processing
- 81 of the models require BPE, 960 require SentencePiece.
- We can decide which is which
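As a side note, a minimal sketch of that Moses preprocessing step, assuming the `mosestokenizer` package (which exposes `MosesSentenceSplitter` and `MosesPunctuationNormalizer` as callable context managers); this is illustrative only and not part of the proposed API:
```python
# Illustrative only: rough Moses preprocessing, assuming `pip install mosestokenizer`.
from mosestokenizer import MosesPunctuationNormalizer, MosesSentenceSplitter

text = "how are you today?  I hope you are well ..."
with MosesSentenceSplitter("en") as split_sents, MosesPunctuationNormalizer("en") as normalize:
    sentences = split_sents([text])              # list of raw lines -> list of sentences
    cleaned = [normalize(s) for s in sentences]  # normalize quotes, spaces, punctuation
print(cleaned)
```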
```python
from typing import Any, Dict, List


class MarianTokenizer:
    def __init__(self, vocab_file, source_bpe, target_bpe, source_spm, target_spm):
        # Decide whether to use BPE or SentencePiece based on which files are present in S3.
        self.source_lang, self.target_lang = None, None  # inferred from paths/config
        self.source_spm = source_spm

    @property
    def uses_sentencepiece(self) -> bool:
        return self.source_spm is not None

    def from_pretrained(self, *args, **kwargs):
        # Needs to be overwritten or modified so it does not fail if certain files are not present.
        ...

    def prepare_batch_for_translation(
        self, src_text: str, tgt_text=None, return_tensors="pt", max_length=512, pad_to_max_length=True
    ) -> Dict[str, Any]:  # values are tensors or lists
        return {}

    def decode_batch(self, target_lang_ids: List[List[int]]) -> List[str]:
        ...

    def decode(self, target_lang_id: List[int]) -> str:
        ...
```
#### Edits
- renamed `MarianProcessor` -> `MarianTokenizer` for consistency | 04-13-2020 14:48:16 | 04-13-2020 14:48:16 | So no way to get output logits?<|||||>The `forward` method returns logits, like other models with language modeling heads. Is that what you meant? |
transformers | 3,774 | closed | Making Simple whitespace tokenizer and then using that tokenizer to make a language model from scratch? | # ❓ How can I make a whitespace tokenizer and use it to build a language model from scratch using transformers.
## Details
I am trying to build a language model with transformers from scratch. For that, I want to build a tokenizer that tokenizes text data using whitespace only, nothing else. It should generate a vocab file that contains no special characters, just the words separated by whitespace, and then I want to use that tokenizer to train a language model from scratch following https://huggingface.co/blog/how-to-train.
I don't want my tokenizer to generate vocab entries that contain special characters such as "##" in front of words, or any accent handling, in my vocab.
I know there are tokenizers that give good results for language modelling, like BPE and WordPiece, but I have a requirement where I want to use a whitespace tokenizer only for training a language model.
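One possible direction, sketched below, is a word-level model with a whitespace-only pre-tokenizer from the `tokenizers` library (this assumes a reasonably recent release of `tokenizers`; the special tokens and file paths are placeholders):
```python
# Sketch only: train a plain whitespace/word-level tokenizer with the `tokenizers` library.
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import WhitespaceSplit
from tokenizers.trainers import WordLevelTrainer

tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = WhitespaceSplit()      # split strictly on whitespace, nothing else
trainer = WordLevelTrainer(special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"])
tokenizer.train(files=["train.txt"], trainer=trainer)
tokenizer.save("whitespace-tokenizer.json")      # vocab entries are whole words, no "##" prefixes
```
The saved file can then be wrapped with `PreTrainedTokenizerFast(tokenizer_file="whitespace-tokenizer.json")` for a from-scratch language-model training run, though this is an untested sketch rather than a verified recipe.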
Thanks and Regards
**A link to original question on Stack Overflow**: | 04-13-2020 13:09:32 | 04-13-2020 13:09:32 | I am also looking for a solution to this problem.<|||||>I want the tokenizer to do something like this one
https://github.com/huggingface/transformers/issues/1036#issuecomment-522201118<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>It has been three years since this was marked stale. Has Hugging Face, or anyone else, implemented something that I could use for this use case? |
transformers | 3,773 | closed | Why the first item of the config.json of bert is "architectures": ["BertForMaskedLM"] | {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 21128
}
Why is the first item "architectures": [
"BertForMaskedLM"
]?
I see that this field is not present in the Google version.
Could you tell me what it means? | 04-13-2020 11:14:07 | 04-13-2020 11:14:07 | This is a property we added to be able to know what kind of final layers the model has. It is used for instance to enable tagging and filtering on our model hub: https://huggingface.co/models |
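A small, hedged illustration of the answer above (`bert-base-chinese` is assumed here only because the vocab size in the question matches it):
```python
# The `architectures` entry is metadata carried by the config; hub tooling reads it for tagging/filtering.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-chinese")
print(config.architectures)  # e.g. ['BertForMaskedLM']: the head class the published weights were saved with
```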
transformers | 3,772 | closed | [TFT5, Cache] Add cache to TFT5 | This PR adds caching for TF T5.
This PR is a straight-forward translation from the caching mechanism introduced in PR: #3682. | 04-13-2020 09:19:09 | 04-13-2020 09:19:09 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=h1) Report
> Merging [#3772](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `90.73%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3772 +/- ##
==========================================
+ Coverage 78.26% 78.33% +0.07%
==========================================
Files 106 106
Lines 17928 18027 +99
==========================================
+ Hits 14031 14122 +91
- Misses 3897 3905 +8
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.03% <80.00%> (-1.22%)` | :arrow_down: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.93% <86.66%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <86.66%> (-0.75%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <87.50%> (-1.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <90.90%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <91.59%> (+0.17%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.88% <92.30%> (+0.08%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.80% <100.00%> (+0.59%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=footer). Last update [7972a40...d9c7a86](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,771 | closed | Cannot find the script | hi,
Where can I find the script "convert_tf_checkpoint_to_pytorch.py"? I have to use the BioBERT model for GLUE tasks. | 04-13-2020 09:14:11 | 04-13-2020 09:14:11 | [It's in `./src/transformers`.](https://github.com/huggingface/transformers/tree/master/src/transformers) |
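For reference, a hedged sketch of how that script could be invoked for a BioBERT checkpoint (the paths and checkpoint name below are placeholders, not verified):
```python
# Sketch only: run the BERT TF->PyTorch conversion script shipped under src/transformers.
import subprocess

subprocess.run(
    [
        "python", "src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py",
        "--tf_checkpoint_path", "biobert_v1.1_pubmed/model.ckpt-1000000",  # placeholder
        "--bert_config_file", "biobert_v1.1_pubmed/bert_config.json",      # placeholder
        "--pytorch_dump_path", "biobert_v1.1_pubmed/pytorch_model.bin",    # placeholder
    ],
    check=True,
)
```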
transformers | 3,770 | closed | Getting error AttributeError: 'BertOnlyMLMHead' object has no attribute 'bias' when giving TF path | # 🐛 Bug
## Information
Model I am using BERT:
Language I am using the model on English:
The problem arises when using:
the official example scripts:
The tasks I am working on is:
my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I have this line in config class:
` self.model_type = "bert"`
The config.model_name_or_path is the path where the checkpoint file, index, meta, config, and vocab files are located.
This is the problem with my code:
```
MODEL_CLASSES = {
"bert": (BertConfig, BertForMaskedLM, BertTokenizer),
}
config_class, model_class, tokenizer_class = MODEL_CLASSES[config.model_type]
tokenizer = tokenizer_class.from_pretrained(config.model_name_or_path, cache_dir=None)
gradients = []
model_config = config_class.from_pretrained(config.model_name_or_path, cache_dir=None)
model = model_class.from_pretrained(
config.model_name_or_path,
from_tf=True,
config=model_config,
cache_dir=None,
)
```
I am getting this error :
```
File "\transformers\src\transformers\modeling_utils.py", line 481, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "\transformers\src\transformers\modeling_bert.py", line 105, in load_tf_weights_in_bert
pointer = getattr(pointer, "bias")
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\torch\nn\modules\module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'BertOnlyMLMHead' object has no attribute 'bias'
Process finished with exit code 1
```
I tried to convert the TF model to a PyTorch model, but I always get the same error (from each conversion script, on a different attribute).
## Expected behavior
I expect the "from_pretrained" to load the TF model
## Environment info
- `transformers` version:
- Platform: windows
- Python version: 3.7
- PyTorch version no
- Tensorflow version 2.1.0:
- Using GPU in script: no
- Using distributed or parallel set-up in script: no
| 04-13-2020 08:19:02 | 04-13-2020 08:19:02 | To load TF models you should use the TF class `TFBertForMaskedLM` and not `BertForMaskedLM` which is the PyTorch class.<|||||>Thanks for the response
I found that BertForPreTraining also allows loading the TF model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
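A minimal sketch of that workaround (placeholder paths; it assumes the directory holds the original TF checkpoint together with its config and vocab, as in the setup above):
```python
# Sketch only: load an original TF BERT checkpoint (with its pre-training heads) into PyTorch.
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_pretrained("path/to/tf_checkpoint_dir")  # placeholder
model = BertForPreTraining.from_pretrained(
    "path/to/tf_checkpoint_dir", from_tf=True, config=config
)
```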
|
transformers | 3,769 | closed | Text generation with Transformer-XL stops at <eos> token. | Hi,
I was running the [text generation notebook](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb), but replaced the GPT-2 model with Transformer-XL, and when I tried to generate text it would always stop at the <<e>eos> token no matter what the max length was.
```
tf.random.set_seed(0)
sample_output = model.generate(
input_ids,
do_sample=True,
max_length=500000000000000,
top_p=0.92,
top_k=0
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=False))
```
```
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog in moments. Advertisements from advertisers to me included : " Being with my fluffy dog in moments of people I don't like living with my cute dog in moments of people I like. I enjoy walking with my cute dog in moments of people I love. " <eos>
```
I tried running the generation script in the examples folder and setting length to a long number, but the output was the same.
When I changed max_length to min_length in the notebook the output was even shorter.
```
tf.random.set_seed(0)
sample_output = model.generate(
input_ids,
do_sample=True,
min_length=500000000000000,
top_p=0.92,
top_k=0
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=False))
```
```
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog in moments. Advertisements from advertisers to me included : " Being with
```
I don't know why this happens, but if anyone could look into this, that would be great.
Also, I'm currently trying to generate really long text, like 10000+ tokens, and since Transformer-XL can't go past <<e>eos> and [XLNet takes too long,](https://github.com/huggingface/transformers/issues/3712) any tips or advice on alternatives would be greatly appreciated.
Thanks! | 04-13-2020 03:52:20 | 04-13-2020 03:52:20 | The `EOS` token stands for `End Of Sentence` and is used as a STOP token.
I.e. when the model generates this token, it means the generation is done and should stop. You can control this behavior with the `min_length` option, which forces the model not to produce the `EOS` token before the minimum length is reached.
In your first try, after the model has generated enough tokens, the `EOS` token can be generated at any moment, even before reaching `max_length`. When it is generated, the model stops; that's why you always see the `EOS` token at the end. That's normal behavior.
As for your second try, it is indeed weird. I guess it's because you didn't specify `max_length`, so the default value (`20` or so) was used. And since `min_length` and `max_length` were not consistent, it didn't work.
---
Can you try to specify both `min_length` and `max_length` with consistent values and try again?
For example:
```
sample_output = model.generate(
input_ids,
do_sample=True,
min_length=1000,
max_length=1050,
top_p=0.92,
top_k=0
)
```<|||||>Thanks for the suggestion, I tried it and it worked!
However, I'm wondering if <<e>eos> is the only token used as a stop token, because when I tried generating text with XLNet, it would generate <<e>eop> and <<e>eod> tokens until the max length was reached.<|||||>At the moment we only allow a single EOS token. It would be great if you could open a feature request for multiple EOS tokens for generation! |
transformers | 3,768 | closed | [PL examples]: fix progress bar bug | As shown by @prabalbansal in #3576, the fine tuning script for seq2seq models is failing with the following error:

This PR includes the fix suggested by @sshleifer in [this comment](https://github.com/huggingface/transformers/issues/3576#issuecomment-611755174), which made it work, but it seems to be suboptimal since the loss is no longer shown in the progress bar. | 04-12-2020 20:08:24 | 04-12-2020 20:08:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,767 | closed | Issues in Training GPT-2 Model from Scratch (Text Generation-Identifying Epoch Value-Perplexity Calculation) | Dear all,
I have trained a GPT-2 model from scratch by following the tutorial at this [link](https://huggingface.co/blog/how-to-train).
I am including the relevant code snippets below:
```
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer
paths = [str(x) for x in Path(".").glob("**/*.txt")]
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths, vocab_size=50257)
tokenizer.save("/content/drive/My Drive/Model")
```
```
from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer = ByteLevelBPETokenizer(
"/content/drive/My Drive/Model/vocab.json",
"/content/drive/My Drive/Model/merges.txt",
)
```
```
import json
config = {
"_num_labels": 2,
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"do_sample": False,
"early_stopping": False,
"embd_pdrop": 0.1,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": False,
"is_encoder_decoder": False,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": False,
"output_hidden_states": False,
"output_past": True,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": True,
"summary_type": "cls_index",
"summary_use_proj": True,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": False,
"use_bfloat16": False,
"vocab_size": 50257
}
with open("/content/drive/My Drive/Model/config.json", 'w') as fp:
json.dump(config, fp)
tokenizer_config = {
"max_len": 1024
}
with open("/content/drive/My Drive/Model/tokenizer_config.json", 'w') as fp:
json.dump(tokenizer_config, fp)
```
Afterwards, I train the model from scratch using the following command:
```
!python run_language_modeling.py \
--train_data_file='/content/drive/My Drive/Dataset/train.txt' \
--output_dir='/content/drive/My Drive/Model/v1' \
--model_type=gpt2 \
--config_name='/content/drive/My Drive/Model' \
--tokenizer_name='/content/drive/My Drive/Model' \
--do_train \
--num_train_epochs=3 \
--evaluate_during_training \
--per_gpu_train_batch_size=2 \
--eval_data_file='/content/drive/My Drive/Dataset/valid.txt' \
--do_eval \
--eval_all_checkpoints \
--per_gpu_eval_batch_size=2 \
--block_size=128 \
--gradient_accumulation_steps=5
```
The model trains to a **good perplexity** of around 4. After about 55K steps, the learning rate approaches 0 and the loss is approximately 1.3. But I do not know how many epochs had been run by that point, because Colab halts the process due to its limits.
However, I am facing the following **issues**:
1. I am using the following code to perform text generation, but it does not give me meaningful generated samples. However, a model that is **fine-tuned** from GPT-2 small using the official [scripts](https://github.com/huggingface/transformers/tree/master/examples#language-model-training) does give me reasonable generated samples.
**Am I doing something wrong when generating samples from a model trained from scratch, does the model simply need more training, or is there a problem in the tokenizer training code?**
```
from transformers import (GPT2LMHeadModel,GPT2Tokenizer,GPT2Config)
model_class, tokenizer_class=GPT2LMHeadModel, GPT2Tokenizer
tokenizer = tokenizer_class.from_pretrained('/content/drive/My Drive/Model/v1')
config = GPT2Config.from_pretrained('/content/drive/My Drive/Model/v1')
model = GPT2LMHeadModel.from_pretrained('/content/drive/My Drive/Model/v1', config=config)
model.to('cuda')
prompt_text = 'hello world'
encoded_prompt = tokenizer.encode(prompt_text, return_tensors="pt")
encoded_prompt = encoded_prompt.to('cuda')
output_sequences = model.generate(
input_ids=encoded_prompt,
max_length=400+ len(encoded_prompt[0]),
do_sample=True,
num_return_sequences=3,
top_p=0.9)
generated_sequences = []
for generated_sequence_idx, generated_sequence in enumerate(output_sequences):
    print("=== GENERATED SEQUENCE {} ===".format(generated_sequence_idx + 1))
    generated_sequence = generated_sequence.tolist()
    # Decode text
    text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)
    # Remove all text after the stop token (only if it is actually present)
    text = text[: text.find("</s>") if "</s>" in text else None]
    # Add the prompt at the beginning of the sequence. Remove the excess text that was used for pre-processing
    total_sequence = (
        prompt_text + text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :]
    )
    generated_sequences.append(total_sequence)
    print(total_sequence)
```
2. Secondly, I am using Colab for experimentation. Due to its [limitations](https://stackoverflow.com/questions/55050988/can-i-run-a-google-colab-free-edition-script-and-then-shutdown-my-computer), my experiments were halted twice during language modeling, so I use the "should_continue" flag to continue language modeling from where it stopped. As a result, I do not know how many of the 3 epochs have been run; Colab only shows the last 5000 lines of output. Up to now, around 55K steps have been run. Is there a way to **work out how many epochs have been run from these 55K steps**? (My rough attempt at estimating this is sketched after this list.)
3. I am also wondering how I get such a good perplexity of around 4 on my validation set. Is this because of not using padding, or what could be the reason?
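For reference, my rough attempt at estimating the epochs from the step count (assuming a single GPU; `num_train_blocks` is a placeholder for the number of 128-token blocks in train.txt, which I believe the script prints as "Num examples" when training starts):
```
global_step = 55000                  # optimizer steps completed so far
per_gpu_train_batch_size = 2
gradient_accumulation_steps = 5
num_train_blocks = 1_000_000         # placeholder: number of block_size=128 blocks in train.txt

blocks_per_step = per_gpu_train_batch_size * gradient_accumulation_steps
steps_per_epoch = num_train_blocks / blocks_per_step
print("epochs completed ~", global_step / steps_per_epoch)
```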
Kindly let me know about these concerns. | 04-12-2020 18:44:20 | 04-12-2020 18:44:20 | Maybe your dataset is too small to make much sense even if you get a smaller perplexity.<|||||>Thanks for your response.
Here are the sizes of my corpus:
1. Training set ~ 77 MB
2. Validation set ~ 10 MB
3. Testing set ~ 10 MB
I built this corpus as follows:
1. Distribute all files into training, testing, and validation sets in an 80% / 10% / 10% ratio.
2. Merge the contents of the files into one file per set, which gives train.txt, valid.txt, and test.txt.
3. Remove extra spaces, tab characters, and end-of-line characters.
4. Tokenize the text in each file with a custom utility, so that each word and punctuation mark is separated by a single space.
5. Then pass these files to the GPT-2 language modeling script.
I did not use any special tokens such as padding, masking, or `<|endoftext|>` tokens when building my corpus.
Is this the right strategy, or will it cause any problems?
<|||||>> Thanks for your response.
>
> Here are the sizes of my corpus:
>
> 1. Training set ~ 77 MB
> 2. Validation set ~ 10 MB
> 3. Testing set ~ 10 MB
>
> I build this corpus as under:
>
> 1. Distribute all files into training, testing and validation sets such as 80%, 10% and 10% ratio.
> 2. Merge the contents of the files into one file for each set, which eventually generates files such as train.txt, valid.txt, and test.txt
> 3. Remove extra spaces, tab spaces, and end line character.
> 4. Perform tokenization of textual data in each file with custom utility, in a way that each word and punctuation is separated with just one space character.
> 5. Then, pass these files to GPT-2 language modeling script.
>
> I did not use any kind of special tokens such as padding, masking, “|< endoftext >|” tokens etc in building my corpus.
>
> Is this a right strategy? Or will there any kind of problem in this strategy?
You'd better set special tokens like `<|start|>` and `<|end|>` at the start and end of each text, like this: `<|start|>a sentence<|end|>`.<|||||>I do not need any kind of sentence modeling for my purpose. Do I still need to specify special tokens in order to perform language modeling with GPT-2?
Could this affect the perplexity calculation and make the perplexity misleading?
<|||||>> I do not need any kind of sentence modeling for my purpose. Do I still require to specify special tokens in order to perform language modeling via GPT-2?
>
> Does this effect on calculating false perplexity?
You can read this paper:
"Semantics of the Unwritten" by He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, Jie Liu, and Ming Li (David R. Cheriton School of Computer Science, University of Waterloo; RSVP.ai; Capital Normal University; School of Electronics Engineering and Computer Science, Peking University).
It should answer your question.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@mhd-git-test I have a similar problem with GPT-2: I get a perplexity score of 7. Did you find an answer to your problem?
|
transformers | 3,766 | closed | Fix shuffling issue for distributed training (#3721) | possible solution for issue [(#3721)](https://github.com/huggingface/transformers/issues/3721) | 04-12-2020 17:35:37 | 04-12-2020 17:35:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=h1) Report
> Merging [#3766](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **decrease** coverage by `0.98%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3766 +/- ##
==========================================
- Coverage 78.26% 77.28% -0.99%
==========================================
Files 106 106
Lines 17928 17928
==========================================
- Hits 14031 13855 -176
- Misses 3897 4073 +176
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.16% <0.00%> (-1.64%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=footer). Last update [7972a40...04737ae](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This looks good to me. |
transformers | 3,765 | closed | Input format for a BertTokenClassification task | # ❓ Questions & Help
https://stackoverflow.com/questions/61168882/processing-and-handling-input-ids-when-using-bert-for-token-classification
So I want to use BERT for semantic entity extraction. This is not quite the same as NER or POS tagging.
For example, given a sentence:
```
A=The leak could have been stopped the same hour it was discovered if the well had a working shut-off valve
```
it returns two separate phrases
```
B= if the well had a working shut-off valve, and C= The leak could have been stopped the same hour it was discovered.
```
Thus I read a three-column CSV file of A, B, C data with pandas and BERT-tokenized everything. So my question: what is the appropriate way to load the data for training? Does it have to be converted into CoNLL format?
```
from torch.utils.data import TensorDataset, random_split
dataset = TensorDataset(input_ids, attention_masks, labels)
```
How do I put the data into `input_ids`? (My rough attempt is sketched below.)
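For reference, this is roughly what I am attempting so far (assuming `batch_encode_plus` is the right tool; the column names are from my CSV and the per-token labels are still an open question for me):
```
import pandas as pd
import torch
from torch.utils.data import TensorDataset
from transformers import BertTokenizer

df = pd.read_csv("data.csv")                      # columns: A (sentence), B, C (target phrases)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

enc = tokenizer.batch_encode_plus(
    df["A"].tolist(),
    max_length=64,
    pad_to_max_length=True,
    return_tensors="pt",
)
input_ids = enc["input_ids"]
attention_masks = enc["attention_mask"]

# labels would need to be per-token tags (shape: num_rows x 64) marking which
# tokens belong to the B span and which to the C span
labels = torch.zeros_like(input_ids)

dataset = TensorDataset(input_ids, attention_masks, labels)
```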
| 04-12-2020 15:44:29 | 04-12-2020 15:44:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,764 | closed | long text classification | I want to do binary classification of documents longer than 512 tokens with BERT.
Is there any reference or code for how to do this? | 04-12-2020 14:31:30 | 04-12-2020 14:31:30 | Following!<|||||>> Following!
I don't know what it means.
<|||||>Use transformer-XL<|||||>Longformer is exactly designed for your use case (https://github.com/allenai/longformer)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Reformer and longformer are being worked on to be included in this library.<|||||>Looks like they've got longformer now.
https://huggingface.co/transformers/model_doc/longformer.html<|||||>Is longformer a multilingual model? or are there options to work with longer texts that are not in English? <|||||>You can also use Text Guide, a clever text truncation method and use a transformer model with a standard 512 limit.
And if you have extremely long text instances (longer than 4096 == Longformer model limit) you can also use this approach to further improve your results.
Paper: https://arxiv.org/abs/2104.07225
Code: https://github.com/krzysztoffiok/TextGuide
A brief description: https://www.quora.com/If-I-have-a-long-text-say-10-paragraphs-how-can-I-use-BERT-or-other-newer-and-better-models-such-as-RoBERTa-for-feature-extraction-that-represents-the-entire-document-Seems-like-BERT-has-limits-Are-there-packages
|
transformers | 3,763 | closed | [CI] Add CircleCI workflow to build docs for preview | This PR adds a CircleCI workflow to build the documentation and store it as an artifact so that we can preview it and verify it's rendered properly. | 04-12-2020 14:07:58 | 04-12-2020 14:07:58 | The built documentation:
https://30100-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html<|||||>How to view the built documentation:
<|||||>@LysandreJik this might interest you!<|||||>@sshleifer Thanks for the comment!
[The CircleCI doc](https://circleci.com/docs/2.0/artifacts/) says:
> Artifacts are stored on Amazon S3 and are protected with your CircleCI account for private projects. There is a 3GB curl file size limit. **Artifacts will be accessible for thirty days after creation**. |
transformers | 3,762 | closed | PPLM Write With Transformer demo not working | # 🐛 Bug
## Information
Model I am using: PPLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
ios Safari and Firefox
The tasks I am working on is:
Trying out the demo
| 04-12-2020 12:26:26 | 04-12-2020 12:26:26 | cc @julien-c <|||||>See https://github.com/huggingface/transformers/issues/4661#issuecomment-636911923 |
transformers | 3,761 | closed | Summarization pipeline fails to initialize | # 🐛 Bug
## Information
Model I am using (T5):
Language I am using the model on (Englis):
The problem arises when using:
* [+] the official example scripts: (give details below)
https://huggingface.co/transformers/main_classes/pipelines.html#transformers.SummarizationPipeline
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
```
Produces the error:
```
Downloading: 100%|██████████| 230/230 [00:00<00:00, 231kB/s]
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-modelcard.json' to download model card file.
Creating an empty model card.
Traceback (most recent call last):
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 243, in get_config_dict
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\anaconda3\envs\gpt2\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-42-1a545b8d35ed>", line 1, in <module>
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\pipelines.py", line 1423, in pipeline
model = model_class.from_pretrained(model, config=config, **model_kwargs)
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\modeling_utils.py", line 434, in from_pretrained
**kwargs,
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 192, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 262, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load 't5-base'. Make sure that:
- 't5-base' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-base' is the correct path to a directory containing a 'config.json' file
```
## Expected behavior
No exception
## Environment info
- `transformers` version: 2.6.0
- Platform: Windows
- Python version: 3,7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
| 04-12-2020 11:15:17 | 04-12-2020 11:15:17 | You're using `transformers` version **2.6.0**, but T5 model was released in version **2.7.0**.
Update your library with :
`pip install --upgrade transformers`<|||||>Thanks, error is not reproduced in 2.8.0 |
transformers | 3,760 | closed | Quick question difference output of Bert models compared to Electra | Hi everyone,
I am a little confused at the moment.
When I run `outputs = model.roberta(input_ids, attention_mask)` or `model.albert(input_ids, attention_mask)` the length of output is 3 and looks like this:
`output[0].shape = [4, 249, 768]`
`output[1][0].shape = [768]`
`output[2][0].shape = [4,249,768]`
When I run `model.electra(input_ids, attention_mask)` the length of output is 2 and looks like this:
`output[0].shape = [4, 251, 768]`
`output[1][0].shape = [4, 251, 768]`
I checked the config files of both models and `output_hidden_states` etc. seem to be set to False in both, and in the code I don't ask for any extra outputs for either model.
Can someone explain why ELECTRA all of a sudden outputs less compared to other models, and also what output[0], output[1] and output[2] mean for BERT and for ELECTRA?
I checked the documentation, but it states that all outputs except the scores are optional, so I am confused about what the output contains now, since to my understanding I haven't asked for any of the optional outputs.
Thanks in advance for helping me clear this confusion up.
| 04-12-2020 08:37:59 | 04-12-2020 08:37:59 | For questions like this it's often best to dive into the source code. It helps understanding it a lot.
But your example seems wrong, you may have made some copy-pasting errors. If `output[1][0].shape = [768]` then also, because tensors across an axis must have same dimensions (like matrices), `output[2][0].shape = [768]`.<|||||>@BramVanroy I double checked but it is not a copy-pasting error as far as I can tell. I will dive into the source code to try to understand why the length of ELECTRA's output is different from the BERT models'.<|||||>Can you post a reproducible full, but minimal, example?<|||||>@BramVanroy I figured it out. It turns out that even though `output_hidden_states` is set to false in the config, somewhere in the code I am using it gets set to true.
For the BERT models, output[2] contains the hidden layers, and for ELECTRA it is output[1]. I'm still not entirely sure what output[1] is for the BERT models, but for my particular use case it is not important right now. (A minimal check is sketched below.)
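A minimal check that shows the difference (model names are just examples; for BERT, `output[1]` is the pooled `[CLS]` output, which ELECTRA does not have because it has no pooler):
```
import torch
from transformers import BertModel, ElectraModel

input_ids = torch.tensor([[101, 7592, 2088, 102]])   # any valid token ids

bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
electra = ElectraModel.from_pretrained("google/electra-small-discriminator", output_hidden_states=True)

print(len(bert(input_ids)))     # 3: (sequence_output, pooled_output, hidden_states)
print(len(electra(input_ids)))  # 2: (sequence_output, hidden_states)
```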
Thank you for taking your time to help me. I will close the issue. |
transformers | 3,759 | closed | Why does `examples/translation/t5` test on newstest2013 rather than newstest2014? | # Details
The example in examples/translation/t5 uses `newstest2013`, but authors report against `newstest2014` (presumably newstest2014.full):
> Since this is our final set of experiments, we report results on the test set rather than
the validation set. For CNN/Daily Mail, we use the standard test set distributed with the dataset.
For the WMT tasks, this corresponds to using newstest2014 for English-German
[Original paper, p30](https://arxiv.org/pdf/1910.10683.pdf).
Is this intentional? | 04-12-2020 07:33:34 | 04-12-2020 07:33:34 | Hey @tholiao,
Thanks for the catch! You're right, it should be newstest2014! Do you want to open a PR to change it? Or I can do it as well<|||||>No worries, I'll submit a PR.<|||||>@patrickvonplaten, would you mind running evaluate_wmt.py on this branch and computing sacreBLEU via command line? (`cat newstest2014_de_translations.txt | sacrebleu -t wmt14 -l en-de --tokenize intl`)<|||||>Thanks a lot for the PR! I will run the script once the PR is merged :-)
transformers | 3,758 | closed | Pipeline for Text Generation: GenerationPipeline | ### This PR implements a text generation pipeline, `GenerationPipeline`, which works on any `ModelWithLMHead` head, and resolves issue #3728
This pipeline predicts the words that will follow a specified text prompt for autoregressive language models. I've registered it to the pipeline function using `gpt2` as the default `model_type`.
The implementation is based on the approach taken in [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py), which means the forward pass uses the `PreTrainedModel.generate()` method in [modeling_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L116:1), as recommended to me by @julien-c and @patrickvonplaten .
Sample code:
```
# Pip install
# If you're using Google Colab, make sure to reset runtime after installing
!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers
# Pipeline uses `gpt2` by default
from transformers import pipeline
gpt = pipeline('generation', num_return_sequences=1, length=40)
gpt("Natural language processing is amazing!")
# ["Natural language processing is amazing! Just take a look at these some of the features. Go off and read up on them all…\n\nSay hello to the world of BitLocker with ES2016. It's a game."]
```
**Google Colab tutorial [here](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz) for running GenerationPipeline for the following LM models:**
1. OpenAI GPT
2. OpenAI GPT-2
3. Transformer-XL
4. XLM
5. XLNet
6. T5
7. CTRL (Colab RAM is too small to load this model)
For context, I also plan to use the above `GenerationPipeline` for my Humor Generation Bot ([issue](https://github.com/enzoampil/tito-joker/issues/29)).
I'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level. | 04-12-2020 06:53:04 | 04-12-2020 06:53:04 | Hi @enzoampil,
Thanks again for the PR - I reviewed it. I think we can start by deleting a lot of the code and keeping it simple. It can be quite hard to get used to all the "under-the-hood" behavior that happens in pipelines. I think we should stick to the format that was used for the `summarization` pipeline e.g. and we shouldn't need a `__init__` fn in the beginning.
We should also add tests for generation in `tests/test_pipelines.py` .
Let me know if the comments are clear! If the PR seems too much, just let me know - I can help then a bit as well :-) <|||||>Hi @patrickvonplaten , thank you for the very clear comments and concrete changes requested. I will work on this by this weekend :)<|||||>That sounds great :-)
Don't worry yet about adding the `task_specific_params` to each of the models configs - I will do this at a later stage! Regarding the tests, you can use the same test logic that was used for `summarization` :-) <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=h1) Report
> Merging [#3758](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9180e3fa4a396fc5a066ab88b85445e26d69bc4c&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3758 +/- ##
=======================================
Coverage 78.58% 78.58%
=======================================
Files 106 106
Lines 18003 18003
=======================================
Hits 14148 14148
Misses 3855 3855
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=footer). Last update [9180e3f...9180e3f](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten I've applied the requested changes (all tests passing), including the pipeline tests, and changing the class name to `TextGenerationPipeline`, as recommended by @julien-c and @thomwolf .
Keen to get feedback on this! Thanks :smile:<|||||>Great work @enzoampil! We will add the generation pipeline in the next release and I think it's gonna be a very useful and widely used feature!
@enzoampil, @thomwolf, @LysandreJik, @julien-c - After the changes requested above, I'd be happy to merge from my side.
A couple of things that were added in this PR and would need discussion are:
1. `XLM` is not supported in this `TextGeneration` pipeline. I played around with multiple models of `XLM` and never had any reasonable results for generation, so I think it's best to remove it here. The other models `GPT1, GPT2, CTRL, XLNet, Transfo-XL` work very well.
2. `XLNet` and `Transfo-XL` need a padding text to work well. This is added and removed afterward, so the user doesn't notice it at all. In a follow-up PR we could maybe add a warning about it.
3. We can now also introduce `text_generation` task_specific_params to the dicts (Transfo-XL and XLNet need a new min and max length) - I can do this after the PR is merged.
4. I think we can remove the `run_generation` script then completely no?
5. Tensorflow tests should also be added in a future PR (or feel free to add them here @enzoampil - forgot to mention that in the review actually)<|||||>Thank you so much @patrickvonplaten ! Will apply the rest of the changes within the next day or two :) <|||||>Awesome, thanks @enzoampil! LGTM.<|||||>Wanted to thank you guys again for guiding me through my first (relatively) big PR for `transformers` @patrickvonplaten @julien-c @thomwolf 😄
The work of HuggingFace with both the implementation and democratisation of state of the art NLP is something I deeply resonate with. I've been an industry practitioner of NLP for the passed few years and `transformers` has really helped me a lot.
With this, I've recently decided to dedicate a large chunk of my open source time contributing to this package! Looking forward to helping out more and more.
I will keep an eye out for specific issues that I can help out with, and am very open to advice on how I can help in a way that's most useful 🙂 <|||||>@LysandreJik
1. I think we can fix the XLNet generation issue by setting `max_length` as the max length of the *generated* text, rather than the full text. This can be implemented by ensuring that we add the number of tokens in `prompt_text` to the `max_length` argument. Something like below:
```
max_length = max_length + len(input_ids.squeeze())
```
However, this may require that we set `max_length` as an explicit argument for `__call__`, rather than as part of `generate_kwargs`. @patrickvonplaten Do you think this makes sense to do?
2. Sure thing, will work on adding `TextGenerationPipeline` to `./docs/source/main_classes/pipelines.rst`<|||||>Sorry, I forgot to add the `max_length` as generation task specific params to the XLNet and TransfoXL configs. I will do this now.<|||||>@enzoampil - Sorry for fiddling in your code so much :D
It's actually not as easy as I thought to have the final output correct for XLNet and Transfo-XL. My commits suggestions now should work. You should run `make style` once they are integrated :-) <|||||>Maybe we should also add an optional `padding` argument to the `__call__` function that overwrites `self.PADDING` for XLNet and Transfo-XL @LysandreJik. But we can do this in a separate PR @enzoampil - let's try to merge this one first.<|||||>> Sorry, I forgot to add the `max_length` as generation task specific params to the XLNet and TransfoXL configs. I will do this now.
Ok added it to the config of Transfo-XL and XLNet
@LysandreJik @thomwolf, we also might want to discuss the default generation params for each model. I think it might e.g. be better to set `do_sample=True` for all models that can generate.<|||||>I don't have any strong opinions on whether we should sample or not; However, I think whatever the choice we should make sure that it is explicit in the pipeline documentation that we may control it from the pipeline directly.
Maybe a link linking to the `generate` method would do the trick, alongside a small explanation that all kwargs will be passed to this underlying method.<|||||>@patrickvonplaten Ran `make_style` and just fixed a minor bug from the `generation` line I think being accidentally taken out from one of your prior [commits](https://github.com/huggingface/transformers/pull/3758/commits/29ce6d82e835e1225c26b5cc4c4ce9f6fe1451ff). The pipeline seems to work fine now :smile:
Also, not sure if this is specific to this PR, but there are tests that are suddenly returning an error for the lines that contain `self._create_and_check_torchscript(config, inputs_dict)`.
Sample error:
```
_____________ AlbertModelTest.test_torchscript_output_hidden_state _____________
[gw7] linux -- Python 3.7.7 /usr/local/bin/python
self = <tests.test_modeling_albert.AlbertModelTest testMethod=test_torchscript_output_hidden_state>
def test_torchscript_output_hidden_state(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
config.output_hidden_states = True
> self._create_and_check_torchscript(config, inputs_dict)
tests/test_modeling_common.py:197:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_common.py:206: in _create_and_check_torchscript
model = model_class(config=configs_no_init)
/usr/local/lib/python3.7/site-packages/transformers/modeling_albert.py:455: in __init__
self.init_weights()
/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py:392: in init_weights
self.apply(self._init_weights)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:290: in apply
fn(self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = AlbertModel(
(embeddings): AlbertEmbeddings(
(word_embeddings): Embedding(99, 128, padding_idx=0)
(position_... )
)
)
)
(pooler): Linear(in_features=36, out_features=36, bias=True)
(pooler_activation): Tanh()
)
module = Embedding(99, 128, padding_idx=0)
def _init_weights(self, module):
""" Initialize the weights.
"""
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
E RuntimeError: normal_ expects std > 0.0, but found std=0
/usr/local/lib/python3.7/site-packages/transformers/modeling_albert.py:377: RuntimeError
________________________ BertModelTest.test_headmasking ________________________
[gw1] linux -- Python 3.7.7 /usr/local/bin/python
self = <tests.test_modeling_bert.BertModelTest testMethod=test_headmasking>
def test_headmasking(self):
if not self.test_head_masking:
return
global_rng.seed(42)
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
global_rng.seed()
config.output_attentions = True
config.output_hidden_states = True
configs_no_init = _config_zero_init(config) # To be sure we have no Nan
for model_class in self.all_model_classes:
> model = model_class(config=configs_no_init)
tests/test_modeling_common.py:260:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.7/site-packages/transformers/modeling_bert.py:619: in __init__
self.init_weights()
/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py:392: in init_weights
self.apply(self._init_weights)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply
module.apply(fn)
/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:290: in apply
fn(self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(99, 32, padding_idx=0)
(position_embed...
(pooler): BertPooler(
(dense): Linear(in_features=32, out_features=32, bias=True)
(activation): Tanh()
)
)
module = Embedding(99, 32, padding_idx=0)
def _init_weights(self, module):
""" Initialize the weights """
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
E RuntimeError: normal_ expects std > 0.0, but found std=0
/usr/local/lib/python3.7/site-packages/transformers/modeling_bert.py:525: RuntimeError
```<|||||>Those tests are probably failing because a new PyTorch version was released. Can you just rebase your branch onto master?
```
$ git fetch upstream
$ git rebase upstream/master
```
(Assuming that you added the master branch as a remote branch "upstream").
The test should then pass :-)<|||||>@patrickvonplaten Apologies, I'm having issues with the rebase suggested above.
I initially tried it but ended up showing up as a co-committer with the rebased commits, which explains why I performed a `force-push` above to revert the rebase. It *might* be related to an issue I'm having where I'm forced to do a `rebase --skip` with each of the conflicts (same situation as [here](https://stackoverflow.com/questions/14410421/git-rebase-merge-conflict-cannot-continue)).
May I please ask for some assistance / advice with this?<|||||>Once again, thanks so much! Looking forward to contributing more in the future 😄@patrickvonplaten @julien-c |
transformers | 3,757 | closed | Dealing with class imbalance | Are there any built-in methods for dealing with class imbalance in BERT? | 04-11-2020 20:28:28 | 04-11-2020 20:28:28 | Did you find anything on this? Been digging around for the same, seems like `from_pretrained` used to allow `weight` as an arg?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
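A common workaround (not a built-in `transformers` option as far as I know) is to skip the model's internal loss and apply a weighted `CrossEntropyLoss` yourself:
```
import torch
import torch.nn as nn
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class_weights = torch.tensor([0.3, 0.7])          # e.g. inverse class frequencies
loss_fct = nn.CrossEntropyLoss(weight=class_weights)

enc = tokenizer.batch_encode_plus(["an example sentence"], return_tensors="pt")
labels = torch.tensor([1])

logits = model(enc["input_ids"], attention_mask=enc["attention_mask"])[0]  # no labels passed
loss = loss_fct(logits.view(-1, 2), labels.view(-1))
loss.backward()
```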
|
transformers | 3,756 | closed | Trace log probs on generation | This PR makes a **few code-line changes** to accomplish the following:
- We want to trace the log probabilities of tokens generated during generation, so that we can do policy-gradient methods (e.g. improve ROUGE scores for summarization with RL).
- This requires keeping track of the computation graph as well as the log probs.
- We remove the `@torch.no_grad()` decorator on the `generate` method in `modeling_utils.py`. We replace this with `torch.set_grad_enabled(False)` by default, and at the end of the function we call `torch.set_grad_enabled(True)` to restore the original state.
- We use `torch.distributions.Categorical` to sample from the softmax. We can call `dist.sample()` and Torch will keep the gradients (a small sketch follows this list).
- We modify `top_k_top_p_filtering` slightly by adding `with torch.no_grad()` for parts of the code which unnecessarily trace the gradient.
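A small standalone sketch of the `Categorical`-based sampling (illustrative only, not the exact code in the PR):
```
import torch
from torch.distributions import Categorical

logits = torch.randn(2, 50257, requires_grad=True)   # (batch, vocab) next-token logits

dist = Categorical(logits=logits)
next_tokens = dist.sample()                 # (batch,) sampled token ids
log_probs = dist.log_prob(next_tokens)      # differentiable w.r.t. the logits

reward = torch.ones_like(log_probs)         # placeholder reward, e.g. ROUGE
loss = -(reward * log_probs).mean()         # REINFORCE-style objective
loss.backward()
```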
## Tests
I have run the tests not including the slow ones and they all passed.
## Example:
```
tokenizer = AutoTokenizer.from_pretrained('distilgpt2')
model = AutoModelWithLMHead.from_pretrained('distilgpt2')
outputs = model.generate(max_length=40,
do_sample=True,
trace_log_probs=True,
eos_token_id=99999,
num_beams=1,
num_return_sequences=3
)
tokens, log_probs = outputs
print(log_probs)
print(log_probs.shape)
print(tokens.shape)
```
We add error handling to disallow unsupported configurations:
- beam search not supported
```
outputs = model.generate(max_length=40,
do_sample=True,
trace_log_probs=True,
eos_token_id=99999,
num_beams=5,
num_return_sequences=3
) # throws an error
```
- trying to trace while not doing do_sample
```
outputs = model.generate(max_length=40,
do_sample=False,
trace_log_probs=True,
eos_token_id=99999,
num_beams=1,
num_return_sequences=3
) # throws an error
```
| 04-11-2020 17:58:29 | 04-11-2020 17:58:29 | Thanks for the PR @aced125 - could you run the slow tests as well and see whether they pass?
I think checking these three tests should be good enough:
`RUN_SLOW=1 pytest tests/test_modeling_gpt2.py`
`RUN_SLOW=1 pytest tests/test_modeling_t5.py`
`RUN_SLOW=1 pytest tests/test_modeling_bart.py`<|||||>Yep @patrickvonplaten , done the above, all passed in a Google Colab notebook: https://colab.research.google.com/drive/12-WUburVlYHsrgKhMMt5MRXOPKbfaS3l<|||||>Hi @patrickvonplaten, wondering if you managed to take a look at this?<|||||>I'm wondering if there is any experimental results demonstrating that `trace_log_probs` is a helpful thing to have in the repo? <|||||>Hi @sshleifer I'll be honest I haven't done any RL in the NLP domain (using transformers in the drug discovery domain) but I know people have tried to optimize ROUGE score for summarization and stuff like that in the past. I can try and maybe put something together for this though?
I do think it is quite a useful feature to have in general though, will need it at some point IMO.<|||||>Cool, thanks for the transparency. From my seat, it would be preferable for you to experiment a bit on a branch to see what works before we merge this into master, as you suggest.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> Hi @sshleifer I'll be honest I haven't done any RL in the NLP domain (using transformers in the drug discovery domain) but I know people have tried to optimize ROUGE score for summarization and stuff like that in the past. I can try and maybe put something together for this though?
>
> I do think it is quite a useful feature to have in general though, will need it at some point IMO.
Hi, I have a similar use case where I require the log_probs from the `generate()` method. Did you find any solution for it? Was your PR merged?<|||||>+1 - I'd also find this feature useful<|||||>This feature would be useful for incorporating a popular technique called "unlikelihood_training".
See (https://github.com/facebookresearch/unlikelihood_training/blob/944465589c0fab534fe6d14a5db2850ddeee43ce/custom/gpt2/run_gpt2.py#L85)
You have to sample from the model to produce negative candidates.
Once this feature is added, adding the unlikelihood loss becomes extremely easy and efficient.<|||||>I'd also want to get the gradients from the `generate` method!
transformers | 3,755 | closed | [Docs] Add DialoGPT | This PR adds DialoGPT to the model page and links the models on the model page https://github.com/huggingface/transformers#model-architectures to the docs. | 04-11-2020 16:27:13 | 04-11-2020 16:27:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=h1) Report
> Merging [#3755](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3755 +/- ##
=======================================
Coverage 78.26% 78.26%
=======================================
Files 106 106
Lines 17928 17928
=======================================
+ Hits 14031 14032 +1
+ Misses 3897 3896 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=footer). Last update [7972a40...c40ade1](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,754 | closed | Deprecation warning due to invalid escape sequences in Python 3.7 | # 🐛 Bug
## To reproduce
Deprecation warnings are raised due to invalid escape sequences. This can be fixed by using raw strings or escaping the literals (illustrated below).
Steps to reproduce the behavior:
```
find . -iname '*.py' | grep -v example | xargs -P 4 -I{} python3.8 -Wall -m py_compile {}
./src/transformers/tokenization_transfo_xl.py:123: DeprecationWarning: invalid escape sequence \:
self.punctuation_symbols = '!"#$%&()*+,-./\:;<=>?@[\\]^_`{|}~' # noqa: W605
./src/transformers/tokenization_transfo_xl.py:150: DeprecationWarning: invalid escape sequence \s
look_ahead_to_match_all_except_space = "(?=[^\s])" # noqa: W605
```
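For illustration, the kind of fix I mean (raw strings so the backslash reaches the regex engine; the exact lines in the library may differ):
```
# before: "\s" is an invalid escape sequence in a normal string literal
pattern = "(?=[^\s])"      # DeprecationWarning under -Wall

# after: raw string (or escape the backslash explicitly, e.g. "\\:" instead of "\:")
pattern = r"(?=[^\s])"     # no warning
```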
## Expected behavior
No warnings
## Environment info
- `transformers` version: master branch
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 04-11-2020 15:25:47 | 04-11-2020 15:25:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>The bug is still valid and I have raised https://github.com/huggingface/transformers/pull/4924 |
transformers | 3,753 | closed | How to speed up the transformer inference? | I have a problem: if I use the model directly for inference, it is very slow. I tried to split the model into an encoder and a decoder, froze the encoder and decoder checkpoints into separate frozen-graph (pb) models (run with the TF C++ API), computed the encoder graph only once, and used the decoder graph to complete the prediction, but it is still slow. I know one reason for this is that while running the decoder there are a lot of repetitive operations. Have you solved this problem? I hope you can give me some advice or a demo, thanks.
| 04-11-2020 14:11:51 | 04-11-2020 14:11:51 | Hi @hahadashi,
Can you add a code snippet so that we know which model you are using and so that we can reproduce the behavior? <|||||>> Hi @hahadashi,
>
> Can you add a code snippet so that we know which model you are using and so that we can reproduce the behavior?
Thanks for your response. I used this tutorial https://www.tensorflow.org/tutorials/text/transformer#encoder_layer to train the model<|||||>Sorry, maybe I was not precise enough:
Which model of the `transformers` library (e.g. Bert, GPT2) did you use? And can you copy / paste the exact code which has a `transformers` model in it that was slow for inference.<|||||>> Sorry maybe I was not precise enough:
> Which model of the `transformers` library (e.g. Bert, GPT2) did you use? And can you copy / paste the exact code which has a `transformers` model in it that was slow for inference.
thx your response,
`class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
"""分拆最后一个维度到 (num_heads, depth).
转置结果使得形状为 (batch_size, num_heads, seq_len, depth)
"""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights`
During decoding with the decoder, e.g.:
1. First I input "sos" and take the argmax to get "A".
2. Then I input "sos A" and get "B".
3. Then I input "sos A B" and get "C".
4. ...and so on.
If we don't save the intermediate states, there are a lot of repetitive operations: in step 2, "sos" has already been computed in step 1; in step 3, "A" has already been computed in step 2. (A sketch of the kind of caching I mean is below.) Thanks.
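To illustrate the caching I have in mind, here is a sketch with the `transformers` GPT-2 model, which returns `past` states so earlier positions are not recomputed (my model is the TensorFlow tutorial one, so this is only an illustration):
```
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("Hello", return_tensors="pt")
past = None
with torch.no_grad():
    for _ in range(10):
        logits, past = model(input_ids, past=past)[:2]
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = next_token    # after the first step, feed only the new token and reuse `past`
```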
If my idea is wrong, I hope you can give me some other speed-up advice.<|||||>It seems like you're not using the `transformers` repository.
I suggest you use Stack Overflow, where you will more likely receive answers to your question. <|||||>> I suggest you use Stack Overflow, where you will more likely receive answers to your question.
OK, thx |
transformers | 3,752 | closed | uss | # ❓ Questions & Help
**A link to original question on Stack Overflow**: | 04-11-2020 14:01:44 | 04-11-2020 14:01:44 | Was this opened by mistake? Can we close it? |
transformers | 3,751 | closed | Extract all last hidden states of the input sequences for question answering BERT | Hello everyone,
I am using run_squad.py to fine-tune BERT for question answering. I would like to save, and later extract, all last hidden states for all sequences for further use. For example, if I have a dataset of 100 sequences of length 64 with 768 features, I will eventually have a tensor of shape (100, 64, 768).
Is there any way to do that using the run_squad.py script? (A sketch of what I mean is below.)
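A minimal sketch of the kind of thing I mean, outside of run_squad.py (assuming `output_hidden_states` can simply be enabled on the QA model; the checkpoint name is just an example):
```
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

enc = tokenizer.batch_encode_plus(
    [("Who wrote it?", "It was written by Jane.")],
    max_length=64, pad_to_max_length=True, return_tensors="pt",
)

with torch.no_grad():
    start_logits, end_logits, hidden_states = model(
        enc["input_ids"],
        attention_mask=enc["attention_mask"],
        token_type_ids=enc["token_type_ids"],
    )
last_hidden = hidden_states[-1]   # (batch, 64, 768); collected over batches this gives (100, 64, 768)
```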
Thank you all | 04-11-2020 12:48:50 | 04-11-2020 12:48:50 | Modify the script :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,750 | closed | ImportError: cannot import name 'HfArgumentParser' from 'transformers' | Hi! When running:
```
python ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
I get
```
Traceback (most recent call last):
File "./examples/run_glue.py", line 34, in <module>
from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/Users/leotreasure/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
```
This is my python path
/Users/leotreasure/opt/anaconda3/bin/python
I ran the GLUE download script in the same folder
python download_glue_data.py
and exported:
export GLUE_DIR=/Users/leotreasure/transformers
export TASK_NAME=MRPC | 04-11-2020 11:35:54 | 04-11-2020 11:35:54 | You need to install from source, as explained [here](https://github.com/huggingface/transformers#run-the-examples).<|||||>Thanks! |
transformers | 3,749 | closed | Question about whitespace filtering in squad data processor | Usually, to detect whitespace in Python, the `isspace()` built-in is used. But I noticed that the SQuAD data processor uses this instead:
```
def _is_whitespace(c):
if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
return True
return False
```
https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L80
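For context, a quick check of where the two differ (using the `_is_whitespace` helper quoted above):
```
for c in ["\x0b", "\x0c", "\u00a0", "\u2009"]:   # vertical tab, form feed, no-break space, thin space
    print(repr(c), c.isspace(), _is_whitespace(c))
# isspace() is True for all of these, while _is_whitespace() treats them as normal characters.
```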
Is there any reason why this is used instead? My guess would be to deal with strings that were processed outside of python. | 04-11-2020 08:10:17 | 04-11-2020 08:10:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,748 | closed | Slow training time on BERT pretraining on multiple gpu compare to single gpu | # ❓ Questions & Help
Hello, I'm pretraining BERT on two RTX 2080 Tis with a batch size of 15 on each GPU, updating every 40 steps.
Training to 28,000 steps on two 2080 Tis takes around 9 days, while a single 2080 Ti takes only 10 days to reach the same point. I was wondering whether this training time is expected, or whether I can do something to improve it.
The first problem I can think of: although I increased the batch size to the most one GPU's memory can hold, GPU usage is only around 50% all the time. I'm not sure how to improve overall GPU usage. Is 50% usage normal? Please give me some advice.
Sincerely | 04-11-2020 07:41:15 | 04-11-2020 07:41:15 | I solve this problem with distributed training.
The DataParallel is really slow with a lot of defects, just don't use it.
<|||||>Hi @ntubertchen, how did you get distributed training to work with the `run_language_modeling.py` script? Or did you do everything from scratch?<|||||>run_language_modeling.py already implements the distributed part; just use the launch command (a sketch is below).
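For reference, a sketch of the kind of launch command meant here (two GPUs on one machine; the remaining arguments are whatever you already use):
```
python -m torch.distributed.launch --nproc_per_node=2 run_language_modeling.py \
    --model_type=bert \
    --model_name_or_path=bert-base-uncased \
    --do_train \
    --train_data_file=train.txt \
    --per_gpu_train_batch_size=15 \
    --gradient_accumulation_steps=40 \
    --mlm \
    --output_dir=output
```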
transformers | 3,747 | closed | text generation like lorem ipsum but human readable | Hi guys,
Hope you are all well !
We would like to add a fake text generator based on transformers/GPT-2 to the WordPress module called https://github.com/bordoni/fakerpress. For now, it only uses an old, incomprehensible lorem ipsum generator.
Is it possible to generate fake text (paragraphs, headings, taxonomies) with transformers/GPT-2 based on types of topics, e.g. Reddit or Shakespeare datasets?
Why would it be useful? For creating fake WordPress sites with human-readable content that is also indexable by a full-text search engine module (e.g. Manticore or Elasticsearch).
Thanks in advance for any insights or inputs on that topic
Cheers,
X | 04-11-2020 04:48:13 | 04-11-2020 04:48:13 | For conditional output generation, you might want to take a look at `CTRL` or you would have to fine-tuned gpt2. To just produce "any" fake text, you could just use `GPT2` out of the box as in this demo:
https://transformer.huggingface.co/doc/gpt2-large<|||||>Thanks for your reply :-)
What is cTRL ? do you have any references ?<|||||>Sorry I should have linked that:
https://huggingface.co/transformers/model_doc/ctrl.html
CTRL is a very big model so quite difficult to run on a local machine - it might be easier to fine-tuned gpt2. I think @mariamabarham knows well how to fine-tune gpt2 - maybe you can add a couple lines? :-) <|||||>I think You can use gt2 with [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). You can consider the [TextDataset](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L66) class or [LineByLineDataset](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L107) or define your own dataset class that suits better with your data structure.<|||||>Do you have a code example ? I am a little bit a newbie in NLP ^^ <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Found this https://colab.research.google.com/drive/1VI3oBIOQYsym2x5oOux7DTNhpdR0r4uw<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,746 | closed | Added README huseinzol05/albert-tiny-bahasa-cased | 04-11-2020 03:27:31 | 04-11-2020 03:27:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=h1) Report
> Merging [#3746](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/700ccf6e35616fcbee59de81edd60cec9e14fb6b&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3746 +/- ##
==========================================
+ Coverage 78.26% 78.27% +0.01%
==========================================
Files 106 106
Lines 17928 17928
==========================================
+ Hits 14031 14033 +2
+ Misses 3897 3895 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=footer). Last update [700ccf6...65d2323](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good! [model page](https://huggingface.co/huseinzol05/tiny-bert-bahasa-cased) |
|
transformers | 3,745 | closed | Add `qas_id` to SquadResult and SquadExample | I'm in the process of adding a `run_tf_squad.py` script, per https://github.com/huggingface/transformers/issues/3685.
This PR:
- Fixes a buggy variable name: `all_example_indices` actually refers to feature indices, so I've renamed it to `all_feature_indices`. This can be verified by adding a breakpoint at https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L347 and running
```python
print(len(set(all_example_index))) # 12272
print(len(features)) # 12272
print(len(set([f.example_index for f in features]))) # 11873
```
This is because an `Example` refers to a Question + possibly several Answers.
A `Feature` refers to a Question + one Answer. There are 12272 features, but only 11873 examples in the SQuADv2 dataset.
- Adds two attributes to the TensorFlow SQuAD dataset: `feature_index` and `qas_id`. `feature_index` has the same function as it does in PyTorch, but it is now possible to retrieve through the tf.data API. `qas_id` is the ID of an example, and matches [the JSON here](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json).
These two features enable a TensorFlow SQuAD validation script. I have it up and running and will include it in a later PR, as support for a native `TFAlbertForQuestionAnswering` is required first.
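For context, a rough sketch of how the TensorFlow SQuAD dataset in question is built (the data path is a placeholder, and exactly where `feature_index` and `qas_id` surface in each element follows this PR's description rather than the released API):

```python
from transformers import AutoTokenizer, SquadV2Processor, squad_convert_examples_to_features

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
examples = SquadV2Processor().get_dev_examples("path/to/squad_v2")  # placeholder data dir

# return_dataset="tf" yields a tf.data.Dataset; with this PR each element also
# carries `feature_index` and `qas_id`, so predictions can be matched back to
# the original examples during evaluation.
dataset = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=False,
    return_dataset="tf",
)
```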
| 04-11-2020 01:54:56 | 04-11-2020 01:54:56 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=h1) Report
> Merging [#3745](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/700ccf6e35616fcbee59de81edd60cec9e14fb6b&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `14.28%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3745 +/- ##
=======================================
Coverage 78.26% 78.26%
=======================================
Files 106 106
Lines 17928 17931 +3
=======================================
+ Hits 14031 14034 +3
Misses 3897 3897
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.61% <14.28%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.22% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=footer). Last update [700ccf6...ce09bce](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Bumping this. @LysandreJik Any thoughts? |
transformers | 3,744 | closed | Turning off Verbosity on QA model using Pipeline | Hi,
I am looking for a way to turn off the log warnings. Please refer to the screenshot.

This is currently what I am doing.

Is there any way? | 04-10-2020 20:20:02 | 04-10-2020 20:20:02 | I was just wondering the same thing. |
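(For anyone landing here: a minimal sketch of one way to silence these messages, assuming the warnings come from loggers under the `transformers` namespace, which is how the library's modules name their loggers:)

```python
import logging

# Raise the level of every logger under the "transformers" namespace so that
# INFO/WARNING messages from the pipeline are suppressed (errors still show).
logging.getLogger("transformers").setLevel(logging.ERROR)
```

Setting the level on the parent logger also covers child loggers such as `transformers.pipelines`, since they defer to their ancestor's effective level.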
transformers | 3,743 | closed | JIT not compatible with PyTorch/XLA | Tracing with JIT is not supported by TPUs. If `torch_xla` is detected in the environment, the `gelu_new` method won't be traced.
If tracing is done, the line:
```py
model = xm.send_cpu_data_to_device(model, xm.xla_device())
```
in `modeling_utils.py`
will raise:
```py
TypeError: can't pickle torch._C.ScriptFunction objects
```
The `# noqa F401` is necessary; otherwise, flake8 gives the following error:
```
src/transformers/activations.py:37:9: F401 'torch_xla' imported but unused
``` | 04-10-2020 19:59:48 | 04-10-2020 19:59:48 | cc @jysohn23 |
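A sketch of the guard this PR describes (the exact code in `activations.py` may differ; this only illustrates skipping `torch.jit` when `torch_xla` is importable):

```python
import math
import torch

def _gelu_new(x):
    """Smoother GELU approximation used by GPT-2 ("gelu_new")."""
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

try:
    import torch_xla  # noqa: F401  - presence means we are running under PyTorch/XLA (TPU)

    gelu_new = _gelu_new  # scripted functions can't be pickled/moved by xm.send_cpu_data_to_device
except ImportError:
    gelu_new = torch.jit.script(_gelu_new)
```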
transformers | 3,742 | closed | Fix `glue_convert_examples_to_features` API breakage | 04-10-2020 19:57:27 | 04-10-2020 19:57:27 | 👍 |
|
transformers | 3,741 | closed | Tokenizer Encode More than 2 inputs | # 🚀 Feature request
Increasingly I'm seeing more than 2 inputs in some cases to BERT model, separated by [SEP] tokens. Often this helps by including context or for pairwise search ranking.
## Motivation
Right now I have to manually add the [SEP] token and concat the output to two tokenizer.encode_plus calls. Seems like it would be simple to just grab all the positional args and treat them as additional fields.
Also seems like this is more expected behavior than arbitrarily limiting encode_plus to single or pairs of text.
## Your contribution
I could submit a PR. | 04-10-2020 19:38:14 | 04-10-2020 19:38:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Had this problem been solved?<|||||>@Zhylkaaa It seems that the problem is still unsolved. Are there any easier ways to encode more than 2 inputs with tokenizer?<|||||>@skpig I think the best way is to use `' <sep_token> '.join(inputs)`. For roberta that would be `' <s> '.join(inputs)`.
But keep in mind that some models are designed to have an [EOS] token at the end (for roberta it's `</s>`; bert doesn't have one, I think).
EDIT: actually I realised that first <s> and </s> will be added automatically after you pass resulting string to `tokenizer(' <sep_token> '.join(inputs))`<|||||>@Zhylkaaa That works. Thanks a lot.
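A small sketch of the workaround discussed above (the model name is illustrative, and the exact spacing of the decoded output may vary by tokenizer):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
segments = ["first query", "candidate passage", "extra context"]

# Join an arbitrary number of segments with the model's separator token,
# then encode the result as a single sequence (BOS/EOS are added automatically).
text = f" {tokenizer.sep_token} ".join(segments)
encoded = tokenizer.encode_plus(text, add_special_tokens=True, return_tensors="pt")
print(tokenizer.decode(encoded["input_ids"][0]))
```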
|
transformers | 3,740 | closed | [WIP] EncoderDecoder model that works | Continuing #3383 from @patrickvonplaten to facilitate MarianMT project.
### Targeted API:
```python
model = EncoderDecoderModel.from_model_names('bert-base-uncased', 'bert-based-uncased')
model.save_pretrained('bert2bert')
model.from_pretrained('bert2bert') # way 2
```
### TODO
- support test_common
- test forward, generate
- raise useful errors for incompatible encoder/decoder combinations
| 04-10-2020 17:04:53 | 04-10-2020 17:04:53 | |
transformers | 3,739 | closed | Seq2seq generation with prefix | This PR introduces two small changes in the way model.generate() works.
Previously, the function took in an input_ids argument which had different behaviors in the seq2seq and language model settings: in language modeling, input_ids could be used to provide a prefix for the generation, while in seq2seq, input_ids represented the encoder input and the generation prefix was automatically initialized to a batch with one time step willed with the [BOS] token.
Conceptually, this feels a little awkward, as a language model and the decoder of a seq2seq model should really behave similarly (the latter just has added conditioning). And more importantly, there was no way to provide both the encoder input_ids and a generation prefix in the seq2seq model.
I've added a prefix_ids argument to fix that. The model will still default to using input_ids as a prefix in the language model setting so as not to break current use cases, but otherwise the model works with prefix_ids and initializes it similarly for the LM and seq2seq settings.
The second smaller change is the initialization of the past variable in generate_beam_search and generate_no_beam_search: it is now initialized to the form it will have in later generation steps, so we can dispense with the firs step tests in the prepare_inputs_for_generation functions in modeling_t5.py and modeling_bart.py
(Next time I'll do two separate PR's as suggested by @sshleifer :) )
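For clarity, the call pattern this PR proposes looks roughly like the sketch below — note that `prefix_ids` is *not* part of the released `generate()` signature, and the model name is purely illustrative:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

encoder_input_ids = tokenizer.encode("Some long article to summarize.", return_tensors="pt")
prefix_ids = tokenizer.encode("The article", return_tensors="pt")  # decoder generation must start from this

# Proposed API only: encoder input and decoder prefix are passed separately.
summary_ids = model.generate(input_ids=encoder_input_ids, prefix_ids=prefix_ids, max_length=40)
```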
| 04-10-2020 16:57:35 | 04-10-2020 16:57:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=h1) Report
> Merging [#3739](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a7fdf71f80452fcae064bd016f06e9a0f0f19ed&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `95.74%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3739 +/- ##
==========================================
- Coverage 78.27% 78.26% -0.02%
==========================================
Files 104 104
Lines 17835 17843 +8
==========================================
+ Hits 13960 13964 +4
- Misses 3875 3879 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <95.45%> (-0.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.89% <95.65%> (-0.08%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.77% <100.00%> (-0.44%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=footer). Last update [7a7fdf7...0de2191](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>In general, I think we have to be careful with a distinction between the different special token ids.
I can see why `decoder_input_token_id` looks weird at first glance, but in #3225 and #3140, we decided to add it to keep Bart's good performance on summarization.
I don't really see the need to overwrite `input_ids` with `prefix_ids` - do we have to do this?
I would be ok with adding an optional `decoder_input_ids` that would be used for encoder-decoder models only.
There are quite a few hidden hacks in `generation()` (like the `force_token_id` fn) that look quite strange. If we replace / delete them, we should always check that the hard-coded integration tests don't fail (running the tests with `Run_SLOW=1` as mentioned above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,738 | closed | [docs] The use of `do_lower_case` in scripts is on its way to depreca… | …tion
Will close #3633
Will close #3584
Will close #3491 | 04-10-2020 16:32:12 | 04-10-2020 16:32:12 | |
transformers | 3,737 | closed | Seq2Seq: decoder hidden_states shape not tested | [this line] points at encoder hidden states for Bart.
https://github.com/huggingface/transformers/blob/2ee410560e45ae3c619dc1e0b0fc4d257c48e18a/tests/test_modeling_common.py#L464 | 04-10-2020 15:12:32 | 04-10-2020 15:12:32 | same for `T5`. I thought a bit about how to correctly sort the `encoder_hidden_states`, `decoder_hidden_states` output (if the user wants to output it). I think it's not easy at all to implement this cleanly...namedtuples would make this so much easier, so maybe just wait until we add those? <|||||>Makes sense.
Are namedtuples on anybody's roadmap?<|||||>Not really as far as I know! It would be a big change (lots of code has to be adapted). In my opinion it would be best to start with the outer-most outputs (the returned `outputs` of the models) and see how that goes:
- How easy is it to have everything backwards compatible?
- How much cleaner does the code get in `generate()` this way?
- How much code has to be added for named tuples? <|||||>(Just assigning myself so that I can easily find our discussion again)<|||||>Hi @patrickvonplaten,
for a distillation purpose of T5, I want to return the `decoder_hidden_states`, using this:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
# training
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=labels)
loss = outputs.loss
logits = outputs.logits
print(outputs.decoder_hidden_states) # None!!
```
<|||||>Can you add `output_hidden_states=True` to `model(...)`? |
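Applied to the snippet above (reusing its `model`, `input_ids`, and `labels`, and assuming a transformers version that returns model-output objects), the suggested fix looks like:

```python
outputs = model(input_ids=input_ids, labels=labels, output_hidden_states=True)

# decoder_hidden_states is now a tuple with one tensor per decoder layer
# (plus the embedding output) instead of None.
print(len(outputs.decoder_hidden_states))
```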
transformers | 3,736 | closed | updated dutch squad model card | 04-10-2020 15:00:03 | 04-10-2020 15:00:03 | ||
transformers | 3,735 | closed | Pipeline for text generation | Hello,
* pipelines concise syntax and features are really nice, but there is none for text generation from left context
* `examples/run_generation.py` concise syntax (and some model-specific preprocessing) are really nice, but it is made for use via CLI and not from code
Any chance we see a text generation pipeline (optionally with some of the `run_generation.py` features) coming to 🤗 Transformers ? | 04-10-2020 14:55:25 | 04-10-2020 14:55:25 | There is currently no text generation pipeline, but the `generate` method on both PyTorch/TensorFlow models is here for that purpose!
[Here's](https://huggingface.co/transformers/usage.html#causal-language-modeling) an example using that method for generating from left context :).<|||||>You are right, I thought there was a great deal of abstraction done by the `run_generation.py` script given its length, but it turns out except for a few things, it's just interfacing with the CLI. I will be fine with the vanilla `generate` function!
Thanks for the rapid answer :)
---
For future reference (hey there future me!), "a few things" are:
* tokenizer encoding+decoding
* careful seed initialization
* moving everything to cuda device
* stop token handling
* nice logging
* (padding text for some models)
* (capping generation length)<|||||>@LysandreJik Is there a way to generate text given context in a random position? For example, given a keyword 'window' I'd like to generate text that contains 'window', doesn't matter where. For example:
* she was looking out the window
* cleaning that window was a difficult task |
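For reference, a minimal sketch of the left-context generation approach suggested earlier in this thread (model choice and sampling settings are illustrative):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt_ids = tokenizer.encode("The weather today is", return_tensors="pt")
output_ids = model.generate(prompt_ids, max_length=30, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```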
transformers | 3,734 | closed | [Config, Caching] Remove `output_past` everywhere and replace by `use_cache` argument | The `config.output_past` variable is removed and replaced by a function argument `use_cache`.
The reasons for this were explained in PR: https://github.com/huggingface/transformers/pull/3682
Affected models are:
T5, Bart, GPT2, XLNet, CTRL, TFGPT2, TFCTRL, TFXLNET
It is made sure that the change **does not break backwards compatibility** by setting `use_cache=True` by default since `config.output_past` was set to `True` by default before.
This can also be checked by seeing that none of the tests had to be changed (except T5's where the last PR: https://github.com/huggingface/transformers/pull/3682 for GPT2 and CTRL broke the backward compatibility of the default output length of T5)
I made the behavior of using the `past` variable the same in GPT2, T5, and CTRL. The logic is the following:
If the user decides to use `past`, the `past` key-value states are cached and output.
The user can then optionally input only the last `input_ids` instead of all previous ones.
If the user decides to use `past`, the `last_hidden_states` output is reduced to only the last tensor instead of the same length as the `input_ids` (this is the same as before and cannot really be changed anyway, because when caching keys and values the earlier outputs cannot be calculated anymore - and should not be, to improve speed).
It is made sure that if `use_cache` is False, nothing is cached! This means that a lot of memory can be saved when the user needs to be memory efficiency (this was not the case before).
All of this should be documented well in each of the models docstrings - let me know if something is badly documented and I'll change it :-)
Would be nice if you could take a look @sshleifer @thomwolf @LysandreJik @yjernite
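To make the caching logic above concrete, here is a minimal sketch of the intended usage (GPT-2 is used as an example; argument names follow the API at the time of this PR):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my", return_tensors="pt")
outputs = model(input_ids, use_cache=True)
logits, past = outputs[0], outputs[1]            # `past` holds the cached key/value states

next_token = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
outputs = model(next_token, past=past, use_cache=True)  # only the newest token is fed in
```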
| 04-10-2020 14:51:34 | 04-10-2020 14:51:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=h1) Report
> Merging [#3734](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `89.47%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3734 +/- ##
=======================================
Coverage 78.26% 78.26%
=======================================
Files 106 106
Lines 17928 17956 +28
=======================================
+ Hits 14031 14054 +23
- Misses 3897 3902 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.03% <80.00%> (-1.22%)` | :arrow_down: |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.93% <86.66%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <86.66%> (-0.75%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <87.50%> (-1.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <90.90%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <91.66%> (+0.16%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.74% <100.00%> (+0.53%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `89.08% <100.00%> (+0.02%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=footer). Last update [7972a40...0d27b0e](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>IMO, it's not a configuration option because it's either not configurable or does not need to be configured at __init__.
Whether a model **implements** caching could be a property of the model, but the config should have nothing to do with that since it can't effect it.
If a model implements caching, you nearly always want to use it to speed up generation, but never want to use it during training. So if you generate during your validation step, should you reload your model with a new config? I think not.
<|||||>I understand. I still feel that it should not be an input to the forward method which currently accepts tensors of inputs to be fed to the model. I don't think having a boolean flag here would make sense.
I see it exactly as `output_attentions` and `output_hidden_states`, which are configurable with the configuration and are not boolean flags passed during the forward method. How is that different? <|||||>The only difference is that there is a common use case where you want the flag to be true during validation and false during training.<|||||>> I understand. I still feel that it should not be an input to the forward method which currently accepts tensors of inputs to be fed to the model. I don't think having a boolean flag here would make sense.
>
> I see it exactly as `output_attentions` and `output_hidden_states`, which are configurable with the configuration and are not boolean flags passed during the forward method. How is that different?
Sorry, I probably should have waited until you answered on the conversation there was about `use_cache` vs `config.output_past` in #3682.
I also noticed during the PR that the `forward()` function expects usually only tensors so that the flag does not fit in.
I'm still in favor of using `use_cache` as an argument though because it gives the user more control over the memory vs. speed trade-off by setting the `use_cache` flag. As @sshleifer said that's especially important when you don't want to use caching during training, but want to speed it up during validation. When having `output_past` in this case, the user would have to change the config for every Attention Layer in the decoder.
Another smaller reason for me to favor `use_cache` is that a lot of models cannot or do not output the `past` key value states.
But happy to change it back or maybe there is another way that would give the user more control and not put it in the `forward` signature? @thomwolf <|||||>Why is it important that `forward` only take tensors/None?<|||||>These are good points.
I think that as @LysandreJik mentioned `use_cache` indeed falls in the same category as `output_attentions` and `output_hidden_states`, i.e. parameters which modify the model behavior without changing its architecture it-self (i.e. can be changed without re-loading/re-instantiating the model).
I also agree with @sshleifer that people may want to alter this behavior between training and testing but I think they may also not want to have to specify this behavior each time they run the forward pass.
Overall, I think this is actually also the same category as the parameters we have in the `.generate()` method.
So I think the behavior we designed for `.generate()` with @patrickvonplaten could be the way to go here:
- have default parameter values in the configuration (that can thus be tweaked at model initialization as well), and
- allow the user to override these defaults parameters at run time with values provided to the `forward()` pass.
It would actually be great to have this behavior implemented for `output_attentions` and `output_hidden_states` as well (this wouldn't be a breaking change).
What do you think @LysandreJik @patrickvonplaten @sshleifer @julien-c ?<|||||>I like @thomwolf suggestion much better than the status quo.
I slightly prefer no mention of `use_cache` on config for a few minor reasons, but I don't feel too strongly:
1. to avoid the proliferation of `if x is None: x = config.x`
2. logic can then be controlled in `prepare_inputs_for_generation` and tracked with version control.
3. configs are long and getting longer and maintaining them is costlier than maintaining tested code.
These arguments also apply to `output_attentions` and `output_hidden_states`, but there we have more of a backwards compatibility issue.
<|||||>Ok those are fair points. I agree with @thomwolf's proposition as well. Adding those arguments to the forward's signature means that from now on we'll control the behaviour of models according to these arguments, and not what's written in the configuration.
I'm okay with this but it is a somewhat of a big change in the API, which we should document.<|||||>Is this good for merge?
I can merge this and open a new PR regarding adding `output_hidden_states` and `output_attentions` to the models signature. I added `use_cache` to the docs when relevant. Should I add it anywhere else in the docs? @LysandreJik |
transformers | 3,733 | closed | How do I get an OpenAIGPTDoubleHeadsModel from the run_language_modeling.py script? | I trained a gpt2-type model from scratch with run_language_modeling.py.
But I want to use an OpenAIGPTDoubleHeadsModel as my model.
My config.json is below. What should I change?
config = {
"architectures": [
"gpt2"
],
"model_name_or_path": None ,
"model_type": "gpt2",
"vocab_size":5000,
"n_positions":1024,
"n_ctx":1024,
"n_embd":768,
"n_layer":6,
"n_head":12,
"activation_function":'gelu_new',
"resid_pdrop":0.1,
"embd_pdrop":0.1,
"attn_pdrop":0.1,
"layer_norm_epsilon":1e-05,
"initializer_range": 0.02,
"summary_type":"cls_index",
"summary_use_proj":True,
"summary_activation":None,
"summary_proj_to_labels":True,
"summary_first_dropout":0.1,
"bos_token_id":4999,
"eos_token_id":4999,
} | 04-10-2020 13:12:53 | 04-10-2020 13:12:53 | What are you trying to do exactly? If using our scripts, you would usually pre-train the transformer model (without the heads, so it doesn't make sense to use a double heads model), and you would then fine-tune the model with the heads on a specific dataset.
If you can explain a bit more what you're trying to do then I could guide you better towards the appropriate scripts. <|||||>@LysandreJik I am new to all this; I am just trying things out to learn. I saw this script: https://github.com/huggingface/transfer-learning-conv-ai/blob/master/train.py
and I am trying to do something similar for another (non-English) language.
So my idea was to start by pre-training a gpt2 with run_language_modeling from scratch in the new language, and afterwards fine-tune it on dialogue.
From your answer, if I understood correctly, I must first pretrain the gpt2 and then fine-tune it with the heads on a specific dataset, dialogue in my case.
But how do I use the DoubleHeadsModel after the pre-training?
<|||||>That's right, that's how I would do it. I would use the `run_language_modeling.py` script with GPT-2 on your data (beware that pre-training requires a lot of data, and a lot of compute).
Once your model is trained on your *big* dataset, then you can use the double heads model and fine-tune it to dialog. We don't have a script for this now, but the link you shared alongside the [blog post](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) detail how to do it!
In order to do what I just mentioned, you would therefore need a large corpus for pre-training, and a dialog dataset in the specific language for fine-tuning.<|||||>Thanks @LysandreJik. You cleared my mind a little, at least for the pre-training.
The only small blind spot I still have is how I will switch to the double heads after the pre-training (I don't think the blog says anything about that).
Hopefully I will find an answer to that when I get to that point.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
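For anyone with the same blind spot, a hedged sketch of that hand-over step (paths are placeholders; the extra multiple-choice head is randomly initialised and still has to be trained on the dialogue task):

```python
from transformers import GPT2DoubleHeadsModel, GPT2Tokenizer

# Load the checkpoint produced by run_language_modeling.py into the
# double-heads architecture: the shared transformer weights are reused,
# only the new head starts from scratch.
tokenizer = GPT2Tokenizer.from_pretrained("./my-pretrained-gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("./my-pretrained-gpt2")
```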
|
transformers | 3,732 | closed | Fine tuning XLMRoberta for Question Answering | # ❓ Questions & Help
## Details
I am trying to fine-tune XLM Roberta for SQuAD using the run_squad.py script available in examples. I have simply written an `XLMRobertaForQuestionAnswering` class as suggested at https://github.com/huggingface/transformers/issues/3694. However, the performance is extremely poor. I am wondering, do I need to perform anything special during preprocessing the SQuAD dataset when I use XLMRoberta?
I am using all the default hyper-parameters provided in the example script. | 04-10-2020 10:00:30 | 04-10-2020 10:00:30 | |
transformers | 3,731 | closed | Loading pipeline("summarization") failed | Hi guys, when I try to load pipeline("summarization") I get the following error:
`Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "PATH", line 11, in <module>
summarizer = pipeline("summarization")
File "PATH\venv\lib\site-packages\transformers\pipelines.py", line 1626, in pipeline
return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs,)
File "PATH\venv\lib\site-packages\transformers\pipelines.py", line 367, in __init__
task_specific_params = self.model.config.task_specific_params
AttributeError: 'NoneType' object has no attribute 'config'
`
**Setup:**
Python: 3.7.6
transformers==2.8.0
tensorboard==2.0.2
tensorflow==2.0.0
tensorflow-estimator==2.0.1
tensorflow-hub==0.7.0
```
from transformers import pipeline
from transformers import TFAutoModelWithLMHead, AutoTokenizer
summarizer = pipeline("summarization")
``` | 04-10-2020 08:49:33 | 04-10-2020 08:49:33 | The default summarization pipeline doesn't have support for TF unfortunately, but we should probably add an explicit error message @sshleifer <|||||>+1<|||||>You can use `T5's` TF summarization though, like:
`pipeline("summarization", model="t5-base", framework="tf")`<|||||>I think the error has nothing to do with "summarization" or "Bart". I think the problem is that you were calling a pytorch pipeline without having pytorch installed. If this happens a weird error message like the one above is thrown out. We should probably add a better error message here:
https://github.com/huggingface/transformers/blob/700ccf6e35616fcbee59de81edd60cec9e14fb6b/src/transformers/pipelines.py#L1567
Something along the lines: `if modelclass in None: <good_error_message>`<|||||>To run pipelines in TF, the argument `framework="tf"` should be added to `pipeline()`<|||||>I don't think that's right, @patrickvonplaten.
Pipelines use TF automatically if that's what you have instead of PyTorch: ie it does `framework = "pt" if is_torch_available() else "tf"`
However, as I was saying, the **default** (bart-based) summarization pipeline doesn't have a TF model, see line 1447:
```python
"default": {
"model": {"pt": "bart-large-cnn", "tf": None},
}
```
<|||||>Sorry, you are 100 % right @julien-c!
I overlooked this line:
https://github.com/huggingface/transformers/blob/700ccf6e35616fcbee59de81edd60cec9e14fb6b/src/transformers/pipelines.py#L1564
So, we do need a better error message for pipelines that have a `None` as a default model.<|||||>@julien-c works fine with pytorch, thanks<|||||>Thanks, works for me as well!
As mentioned before, the improved error message would help a lot.<|||||>Do you want to take a stab at a better error message for this @patrickvonplaten? |
transformers | 3,730 | closed | OOM error when resuming training from a checkpoint | # 🐛 Bug
Some previous issues #2954 described a memory leak when resuming training from a checkpoint. I still get an OOM error when resuming training from a checkpoint. | 04-10-2020 08:45:23 | 04-10-2020 08:45:23 | Hi, unfortunately, we have no way of helping without having more information. What scripts are you using? What model? What Python version? What transformers version?
It would be wonderful if you could use the issue template and describe exactly the issue so that we may help.<|||||>Hi, I'm sorry. I'm using [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). After some more experiments, I noticed as #2954 did, that the OOM error only happens when resuming on a checkpoint in multi GPU training. When resuming using a single GPU there's no error.
Command example:
```
python -m torch.distributed.launch --nproc_per_node 8 run_language_modeling.py --output_dir=./output/ --model_type=gpt2 --model_name_or_path=gpt2-large --do_train --train_data_file=./data/training.txt --per_gpu_train_batch_size 1 --num_train_epochs 3 --fp16
```
Error:
```python
Traceback (most recent call last):
File "run_language_modeling.py", line 992, in <module>
main()
File "run_language_modeling.py", line 942, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 428, in train
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, "optimizer.pt")))
File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 590, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 764, in _legacy_load
result = unpickler.load()
File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 726, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 190, in default_restore_location
result = fn(storage, location)
File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 170, in _cuda_deserialize
return storage_type(obj.size())
File "/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 478, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 31.72 GiB total capacity; 1.89 GiB already allocated; 10.88 MiB free; 1.92 GiB reserved in total by PyTorch)
```
@LysandreJik Can you please reopen the issue?<|||||>I ran into this issue as well when restarting from a checkpoint.
I think this is a bug in [trainer.py](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L334) :
```
optimizer.load_state_dict(torch.load(os.path.join(model_path, "optimizer.pt")))
```
Loading from `optimizer.pt` causes `optimizer` to be mapped to the same device as the saved `optimizer.pt`. In this case it's always `cuda:0`(saved by local master), which puts all optimizers on gpu0, causing OOM.
Changing it to
```
optimizer.load_state_dict(torch.load(os.path.join(model_path, "optimizer.pt"), map_location=self.args.device))
```
solved it for me.<|||||>This looks correct, can you open a PR?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,729 | closed | exbert links for my albert model cards | Adding links for exbert visualization. | 04-10-2020 01:52:06 | 04-10-2020 01:52:06 | Hi @elgeish , you also need to add a
```
tags:
- exbert
```
to the metadata block.
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=h1) Report
> Merging [#3729](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce2298fb5f84a8d0d8860c15fb677b7ada07a8ad&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3729 +/- ##
==========================================
+ Coverage 78.18% 78.20% +0.01%
==========================================
Files 104 104
Lines 17799 17799
==========================================
+ Hits 13917 13919 +2
+ Misses 3882 3880 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.22% <0.00%> (+0.21%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=footer). Last update [ce2298f...e003d19](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 3,728 | closed | Checking that the LM actually trained | I have trained a gpt2 from scratch in the way described in this post: https://huggingface.co/blog/how-to-train .
Just in step 4, where he checks whether the trained model actually works, he uses the pipeline
"fill-mask", but that works only for models with a masked language modeling objective.
Exists something similar i could use like "fill-mask" for my case? | 04-09-2020 21:57:00 | 04-09-2020 21:57:00 | Yes: simply `model.generate()` (not even a need for a Pipeline in that case)
cc @patrickvonplaten <|||||>I'd check if 'GPT2' works by sampling from a simple prompt. E.g.:
```
output = model.generate(tokenizer.encode('The president', return_tensors='pt'), do_sample=True)
tokenizer.decode(output[0])
```
<|||||>Thanks for clarifying! I was about to consider sending a PR for a `GenerationPipeline` under `transformers.pipeline`.<|||||>#### I have a branch that implements a GenerationPipeline which already works for GPT models
The initial version of `GenerationPipeline` can be found in the branch's pipelines [module](https://github.com/enzoampil/transformers/blob/generation_pipeline/src/transformers/pipelines.py), where I've registered it to the `pipeline` function using `gpt2` as the default.
The implementation is based on the approach taken in [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py), which means the forward pass uses the `model.generate()` method explained by @julien-c and @patrickvonplaten above.
So far, the code above works smoothly for `open-ai` and `gpt2`.
Sample code:
```
# Pip install
# If you're using Google Colab, make sure to reset runtime after installing
!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers
# Pipeline uses `gpt2` by default
from transformers import pipeline
gpt = pipeline('generation', num_return_sequences=1, length=40)
gpt("You look great")
# ['You look great, me!" he says. "There\'s nothing wrong with that, it\'s just I wanted a bit of attention so I had to go to work. I had to back down."\n']
```
However, the module still doesn't work with other language models like `xlm`, `xlnet`, and `transfo-xl`.
I will do a root cause analysis on this and will send a PR as soon as I get this to work on the rest of the language models that should work with `GenerationPipeline` (i.e. those runnable from `run_generation.py`).
For more details, you can check out this [colab notebook](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz), which shows the gpt models working so far, and the rest of the models not working in the later sections.<|||||>#### [UPDATE] The issues above have been resolved and I'm in the process of sending a PR.
Google Colab tutorial [here](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz) for running `GenerationPipeline` for the following LM models:
1. OpenAI GPT
2. OpenAI GPT-2
3. Transformer-XL
4. XLM
5. XLNet
6. T5
7. CTRL
<|||||>Your PR looks very nice so far :-) I will take a look early next week!<|||||>Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,727 | closed | ValueError: Cannot reshape a tensor - TFBertForSequenceClassification | I'm building a multiclass text classification model using Keras and BERT.
To convert my inputs to the required bert format, I'm using the `encode_plus` method found in the BertTokenizer class [found here][1]
The data is a paragraph of sentences per feature, and has a single label (of 45 labels in total)
**The code to convert the inputs is :**
def create_input_array(df, tokenizer):
sentences = df.text.values
labels = df.label.values
input_ids = []
attention_masks = []
token_type_ids = []
# For every sentence...
for sent in sentences:
# `encode_plus` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
# (5) Pad or truncate the sentence to `max_length`
# (6) Create attention masks for [PAD] tokens.
encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens=True, # Add '[CLS]' and '[SEP]'
max_length=128, # Pad & truncate all sentences.
pad_to_max_length=True,
return_attention_mask=True, # Construct attn. masks.
return_tensors='tf', # Return tf tensors.
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
token_type_ids.append(encoded_dict['token_type_ids'])
return [np.asarray(input_ids, dtype=np.int32),
np.asarray(attention_masks, dtype=np.int32),
np.asarray(token_type_ids, dtype=np.int32)]
**The model in it's most basic form which still reproduces the error:**
model = TFBertForSequenceClassification.from_pretrained(
"bert-base-uncased",
num_labels = labellen,
output_attentions = False,
output_hidden_states = False
)
**Compile and fit:**
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(x_train, y[:100], epochs=1, batch_size=3)
**The error when I run this :**
> ValueError: Cannot reshape a tensor with 768 elements to shape
> [1,1,128,1] (128 elements) for '{{node
> tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape}}
> = Reshape[T=DT_FLOAT, Tshape=DT_INT32](tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape/ReadVariableOp,
> tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape/shape)'
> with input shapes: [768], [4] and with input tensors computed as
> partial shapes: input[1] = [1,1,128,1].
I understand that BERT converts every token into a 768 value array, but that is the only knowledge I have of that particular number, so I'm stuck on how to proceed.
I would also appreciate your thoughts on whether TFBertForSequenceClassification is appropriate for paragraph classification.
Many thanks.
[1]: https://huggingface.co/transformers/main_classes/tokenizer.html
| 04-09-2020 18:42:07 | 04-09-2020 18:42:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,726 | closed | Separate input_ids and decoder_input_ids in model.generate() | This makes the generation behavior more similar for sequence-to-sequence and language models, and allows us to initialize decoding with a prefix for the encoder-decoder setting. | 04-09-2020 18:30:39 | 04-09-2020 18:30:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=h1) Report
> Merging [#3726](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc65afc4dfac3badf3de3be395d4023b44c61bdd&el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `89.61%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3726 +/- ##
==========================================
+ Coverage 78.14% 78.17% +0.03%
==========================================
Files 104 104
Lines 17723 17799 +76
==========================================
+ Hits 13849 13915 +66
- Misses 3874 3884 +10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.77% <87.90%> (+1.29%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <96.42%> (-0.17%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.60% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=footer). Last update [bc65afc...6a764df](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Can we get the prefix functionality with just one kwarg instead of all the renaming?<|||||>Got stuck in a merge / rebase loop, closing and starting again. |
transformers | 3,725 | closed | Fine-tuning for paraphrasing tasks | # ❓ Questions & Help
I asked on SO and was downvoted since it is considered "off-site" and is against their Terms of Service. My question is somewhat simple. How do I fine-tune a GPT-2 model for the task of paraphrasing like the paper: https://www.aclweb.org/anthology/D19-5623.pdf
A link to my SO question : https://stackoverflow.com/questions/61115488/how-to-fine-tune-gpt-2-for-paraphrasing?noredirect=1#comment108120354_61115488
## Details
My question is somewhat simple. How do I fine-tune a GPT-2 model for the task of paraphrasing like the paper: https://www.aclweb.org/anthology/D19-5623.pdf
Is there a way to achieve this with huggingface-transformers ?
| 04-09-2020 18:10:43 | 04-09-2020 18:10:43 | This might help: https://huggingface.co/transformers/usage.html#sequence-classification<|||||>^^ These are inference examples. Do you know how can I _retrain_ ?<|||||>I would stress that this topic is quite interesting and useful. A good generative model for paraphrasing may help with text classification with small datasets. Backtranslation (for example) has shown as an effective way to augment the training data and boost performance of a classifier. However, echoing the @anmoljagetia, fine-tuning on the target domain may also bee important.
<|||||>@anmoljagetia did you find any method to retrain the model to generate paraphrase sentence?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,724 | closed | Has anyone used run_language_modelling.py to train a GPT-2 from scratch? | I have read this post https://huggingface.co/blog/how-to-train and I would like to do the training from scratch, for a gpt2-type model.
One question also, that i have, is the special tokens that he uses in that post for tokenizer will be the same and for a tokenizer that i will use for a gpt2 model? | 04-09-2020 16:54:36 | 04-09-2020 16:54:36 | Hi @nikkon3, the special tokens for `gpt2` are automatically set when you import `GPT2Tokenizer`.
The code below shows that `'<|endoftext|>'` is the special token used for BOS (beginning of sequence), EOS (end of sequence), and UNK (unknown - out of vocabulary).
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.special_tokens_map
#{'bos_token': '<|endoftext|>',
# 'eos_token': '<|endoftext|>',
# 'unk_token': '<|endoftext|>'}
```
Hope this helps you out!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,723 | closed | How to get multiple answers from the context using BertForQuestionAnswering | How do I get multiple answers from the text using **BertForQuestionAnswering**, just like for the below question there are two possible answers:
1. a nice puppet
2. a software engineer
**Below is the code snippet for the same:**
```
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet.Jim Henson was a software engineer."
input_ids = tokenizer.encode(question, text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
print(answer)
'a software engineer'
```
Thanks in advance!! | 04-09-2020 16:12:39 | 04-09-2020 16:12:39 | this might help https://github.com/google-research/bert/issues/657<|||||>Thanks @chutaklee for the quick response, but may I please know whether we can do the same with the existing pretrained BERT model,by changing any parameters anywhere,as I currently do have question-answer pairs and I'm not not training the model, so just wanted to use pre trained to get the answers.
Thanks in advance!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
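(A possible direction, building on the snippet in the question above: look at the top-k start/end scores instead of only the argmax. The naive pairing below is purely illustrative — a proper n-best search would score all start/end combinations.)

```python
import torch

k = 2
top_starts = torch.topk(start_scores, k, dim=1).indices[0]
top_ends = torch.topk(end_scores, k, dim=1).indices[0]

# Naively pair the i-th best start with the i-th best end and print each span.
for s, e in zip(top_starts, top_ends):
    if e >= s:
        print(" ".join(all_tokens[s : e + 1]))
```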
<|||||>@MaheshChandrra Going through the same problem. Did you find any solution ?<|||||>@subigyaup ,No luck yet!! Will do drop the fix if I find anything. |
transformers | 3,722 | closed | Integrate Bert-like model on Flax runtime. | This Pull Request attempts to bring support for [Flax](https://github.com/google/flax) framework as part of transformers.
The main focus has been put on providing BERT-like models, principally by making it possible to load PyTorch checkpoints and doing the (few) necessary conversions directly on the fly. It also supports providing a **msgpack**-formatted file from Flax.
`save_pretrained` will save the model through **msgpack** format to avoid dependency on torch inside Jax code.
**Targeted models:**
- [x] Bert
- [x] RoBERTa
- [ ] DistilBERT
- [ ] DistilRoBERTa
**If not too hard**
- [ ] CamemBERT
| 04-09-2020 15:24:56 | 04-09-2020 15:24:56 | As you said, Jax is a library that interact with numpy to provide additional features: autodiff, auto-vectorization [(**vmap**)](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html#Auto-vectorization-with-vmap) and auto-parallelization [(**pmap**)](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap).
Jax is essentially stateless, which is reflected here in the fact that the function to differentiate (the model) doesn't hold the parameters. They have to be referenced somewhere else and fed in somehow (see the short sketch after this comment).
`JaxPreTrainedModel` is introduced here mainly to handle the serialization of such models and provide conversion. Also, one specificity of Jax is that many different neural-network libraries are currently being implemented on top of it:
- Google Flax (https://github.com/google/flax)
- Google Trax (https://github.com/google/trax)
- DeepMind Haiku (https://github.com/deepmind/dm-haiku)
In that aspect, @madisonmay is currently working on a [Haiku Bert integration](https://github.com/huggingface/transformers/pull/3520) in transformers. My hope is to be able to share as many things as possible between the two implementations (_but can't be sure for now_) <|||||>Alright, that makes sense. Thanks for the explanation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
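To make the statelessness point above concrete, a tiny sketch using the modern `flax.linen` API (purely illustrative of the pattern, not the code in this PR):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class TinyMLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        return nn.Dense(2)(nn.relu(nn.Dense(8)(x)))

model = TinyMLP()
# Parameters live outside the module, as a separate pytree...
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 4)))
# ...and must be passed explicitly on every call.
logits = model.apply(params, jnp.ones((1, 4)))
```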
<|||||>Unstale<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=h1) Report
> Merging [#3722](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/60de910e6010c76c25dd0ed0999e4c69f9692371?el=desc) will **increase** coverage by `2.55%`.
> The diff coverage is `90.11%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3722 +/- ##
==========================================
+ Coverage 78.32% 80.88% +2.55%
==========================================
Files 187 165 -22
Lines 37162 30383 -6779
==========================================
- Hits 29107 24575 -4532
+ Misses 8055 5808 -2247
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.30% <ø> (-0.11%)` | :arrow_down: |
| [src/transformers/modeling\_flax\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X2F1dG8ucHk=) | `60.86% <60.86%> (ø)` | |
| [src/transformers/modeling\_flax\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X3V0aWxzLnB5) | `83.60% <83.60%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.92% <92.85%> (-0.05%)` | :arrow_down: |
| [src/transformers/modeling\_flax\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X3JvYmVydGEucHk=) | `94.11% <94.11%> (ø)` | |
| [src/transformers/modeling\_flax\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X2JlcnQucHk=) | `96.50% <96.50%> (ø)` | |
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.66% <100.00%> (+0.38%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-65.14%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.11% <0.00%> (-9.71%)` | :arrow_down: |
| ... and [156 more](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=footer). Last update [60de910...c0d1c81](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>cc @levskaya<|||||>It looks like a file is missing:
```
$ make fixup
[...]
Checking all models are properly tested.
Traceback (most recent call last):
File "utils/check_repo.py", line 327, in <module>
check_repo_quality()
File "utils/check_repo.py", line 321, in check_repo_quality
check_all_models_are_tested()
File "utils/check_repo.py", line 212, in check_all_models_are_tested
new_failures = check_models_are_tested(module, test_file)
File "utils/check_repo.py", line 182, in check_models_are_tested
tested_models = find_tested_models(test_file)
File "utils/check_repo.py", line 163, in find_tested_models
with open(os.path.join(PATH_TO_TESTS, test_file)) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'tests/test_modeling_flax_utils.py'
Makefile:25: recipe for target 'extra_quality_checks' failed
make: *** [extra_quality_checks] Error 1
```
Shouldn't the CI have caught this?<|||||>Looks like a problem in `make fixup`, `make quality` runs fine (and that's what the CI runs).<|||||>Nope, both run the same sub-target: `extra_quality_checks`
```
$ make quality
[...]
python utils/check_copies.py
python utils/check_dummies.py
python utils/check_repo.py
2020-10-19 12:11:26.345843: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Checking all models are properly tested.
Traceback (most recent call last):
File "utils/check_repo.py", line 327, in <module>
check_repo_quality()
File "utils/check_repo.py", line 321, in check_repo_quality
check_all_models_are_tested()
File "utils/check_repo.py", line 212, in check_all_models_are_tested
new_failures = check_models_are_tested(module, test_file)
File "utils/check_repo.py", line 182, in check_models_are_tested
tested_models = find_tested_models(test_file)
File "utils/check_repo.py", line 163, in find_tested_models
with open(os.path.join(PATH_TO_TESTS, test_file)) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'tests/test_modeling_flax_utils.py'
Makefile:25: recipe for target 'extra_quality_checks' failed
make: *** [extra_quality_checks] Error 1
```
This is with the latest master.<|||||>PR with fix https://github.com/huggingface/transformers/pull/7914
The question is - why CI didn't fail? It reports no problem here:
https://app.circleci.com/pipelines/github/huggingface/transformers/14040/workflows/6cd2b931-ce7e-4e99-b313-4a34326fcece/jobs/101513
Once I got this fixed, 2 more issues came up:
```
python utils/check_repo.py
2020-10-19 12:22:10.636984: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Checking all models are properly tested.
Traceback (most recent call last):
File "utils/check_repo.py", line 328, in <module>
check_repo_quality()
File "utils/check_repo.py", line 322, in check_repo_quality
check_all_models_are_tested()
File "utils/check_repo.py", line 217, in check_all_models_are_tested
raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
Exception: There were 2 failures:
test_modeling_flax_bert.py should define `all_model_classes` to apply common tests to the models it tests. If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file `utils/check_repo.py`.
test_modeling_flax_roberta.py should define `all_model_classes` to apply common tests to the models it tests. If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file `utils/check_repo.py`.
Makefile:25: recipe for target 'extra_quality_checks' failed
```
Fixed in the same PR.
|
transformers | 3,721 | closed | DistributedSampler can't shuffle the dataset | # 🐛 Bug
## Information
I'm trying to fine-tune BERT model using ```run_language_modeling.py```.
Language I am using the model on is Persian:
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
But according to this [issue](https://github.com/pytorch/pytorch/issues/31771), there is a bug in ```torch.utils.data.distributed.DistributedSampler```: the shuffling operation doesn't work properly across epochs (the order is never reshuffled).
To solve this problem, according to the official PyTorch example [here](https://github.com/pytorch/examples/blob/ad775ace1b9db09146cdd0724ce9195f7f863fff/imagenet/main.py#L238), we should call ```train_sampler.set_epoch(epoch)``` before each new epoch at this [line](https://github.com/huggingface/transformers/blob/f8208fa456039b46873a2e497b6318d30a4fc84e/examples/run_language_modeling.py#L322).
## To reproduce
Steps to reproduce the behavior:
1. compare batches between different epoch like mentioned [issue](https://github.com/pytorch/pytorch/issues/31771)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: transformers==2.8.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): torch==1.4.0 (Yes)
- Tensorflow version (GPU?): tensorflow-gpu==2.1.0 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: distributed
| 04-09-2020 11:57:08 | 04-09-2020 11:57:08 | I think you are right<|||||>Isn't there the same issue in other places?
E.g. in trainer.py: https://github.com/huggingface/transformers/blob/97a375484c618496691982f62518130f294bb9a8/src/transformers/trainer.py#L305-L307<|||||>I forgot to re-add this in Trainer when merging #3800
It's on my todo-list, but feel free to open a PR if you can do it faster than I can<|||||>Great. Personally I've not yet upgraded to the newer version with trainer.py, so I'll leave it for you, thanks. |
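For later readers, a minimal sketch of the fix discussed in this thread, assuming a generic PyTorch distributed training loop (the dataset, epoch count and loop body are placeholders, not the actual `run_language_modeling.py` code):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(1000))      # placeholder dataset
train_sampler = DistributedSampler(dataset)      # requires torch.distributed to be initialized
train_dataloader = DataLoader(dataset, sampler=train_sampler, batch_size=8)
num_train_epochs = 3                             # placeholder

for epoch in range(num_train_epochs):
    # Without this call the sampler shuffles with the same seed every epoch,
    # so every epoch sees the batches in exactly the same order.
    train_sampler.set_epoch(epoch)
    for batch in train_dataloader:
        ...  # forward / backward / optimizer step
```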
transformers | 3,720 | closed | Disable @torch.no_grad() for model.generate() ? | # ❓ Questions & Help
Is there any way to do this?
| 04-09-2020 11:27:05 | 04-09-2020 11:27:05 | At the moment the only solution seems to be copying and pasting the entire generation code, as well as making a few changes that comes along with it, to avoid this issue.<|||||>One solution I propose is to add an argument `with_grad` which defaults to False.
Then, add this as the first line in the generate code:
```
def generate(...):
torch.set_grad_enabled(with_grad)
...
```
This will be backward-compatible.<|||||>Being able to back-prop through the `generate()` fn would require a lot of changes in my opinion. Not sure whether we plan on doing this any time soon. If you find a good way, feel free to open a PR though :-) <|||||>Hi Patrick, yes I understand it's complicated.
Here is a snippet that explains how it may work:
```
import torch
import torch.distributions as dist
import torch.nn.functional as F  # needed below for F.softmax
def generate_and_trace_log_probs(
model, batch_size=32, max_len=100, top_k=0, top_p=1.0, bos_id=1, eos_id=2
):
initial_pool = torch.full(
size=(batch_size, 1),
fill_value=bos_id,
dtype=torch.long,
device=next(model.parameters()).device,
)
past_tokens = initial_pool
current_tokens = initial_pool
log_probs = []
past_attention_computation = None
for i in range(max_len - 1):
# Forward prop through model
outputs = model(
input_ids=current_tokens, past=past_attention_computation
)
# Extract logits for sampling next tokens
logits = outputs[0]
# Top-p and/or top-k filtering
if top_k > 0 or top_p < 1.0:
logits = top_k_top_p_filtering(
logits.squeeze(1), top_k=top_k, top_p=top_p, min_tokens_to_keep=1
).unsqueeze(1)
# Extract attention computations to cache
past_attention_computation = outputs[1]
# Sample logits
catdist = dist.Categorical(logits=logits)
next_tokens = catdist.sample()
# Compute and store log probs for REINFORCE
log_prob = catdist.log_prob(next_tokens)
log_probs.append(log_prob)
# Update input into LM
current_tokens = next_tokens
# Store tokens for reward computation
past_tokens = torch.cat([past_tokens, current_tokens.detach()], dim=-1)
# Check if all examples have had an EOS token - if so, break
if past_tokens.eq(eos_id).any(dim=-1).all():
break
log_probs = torch.cat(log_probs, dim=-1)
# For tokens that came after the EOS token, mask their log prob
for idx, ex in enumerate(past_tokens):
eos_idx = torch.where(ex.eq(eos_id))[0].min()
log_probs[idx, eos_idx + 1 :] = -1e4
return log_probs, past_tokens
def top_k_top_p_filtering(
logits: torch.Tensor,
top_k: int = 50,
top_p: float = 0.95,
min_tokens_to_keep=1,
filter_value=-float("Inf"),
):
"""Add torch.no_grad() for steps that unnecessarily trace gradients"""
if top_k > 0:
with torch.no_grad():
top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1)) # safety check
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p < 1.0:
with torch.no_grad():
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probs above threshold (token with 0 kept)
sorted_indices_to_remove = cumulative_probs > top_p
if min_tokens_to_keep > 1:
# Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
sorted_indices_to_remove[..., :min_tokens_to_keep] = 0
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[
..., :-1
].clone()
sorted_indices_to_remove[..., 0] = 0
# scatter sorted tensors to original indexing
indices_to_remove = sorted_indices_to_remove.scatter(
1, sorted_indices, sorted_indices_to_remove
)
logits[indices_to_remove] = filter_value
return logits
```<|||||>@Laksh1997 - thanks for the code snippet :-) If you think you are able to make a PR that can pass the tests, I think we would be more than happy to add this to the lib!<|||||>Okay, will try...<|||||>@patrickvonplaten Have edited the code (only had to make a few changes to enable this capability!) and ran the tests (369 pass, 808 skip, 10 warnings).
I'm trying to push a new branch but getting access denied.<|||||>@patrickvonplaten that's my other account ...<|||||>I'm reading the instructions now on how to contribute ...<|||||>Done a PR... @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>One could use `model.greedy_search` if they want to backpropagate through the generation process. This worked for me.<|||||>> `greedy_search`
`model.greedy_search` is not working correctly, at least for T5.
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
tokenizer = AutoTokenizer.from_pretrained('t5-small')
model.greedy_search(**tokenizer("I love HuggingFace", return_tensors='pt'))
```
I get the following error with the code above:
```
File "/home/joaolages/.venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 930, in forward
raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
```
I even tried calling `greedy_search` as suggested in [here](https://discuss.huggingface.co/t/question-about-greedy-search/5749/4?u=skinish), but this creates different outputs compared to calling `model.generate` with `num_beams=1`, which shouldn't, right?<|||||>@JoaoLages, you need to also add `encoder_outputs` to `generate` when using it on encoder-decoder models such as T5.
This should work:
```python
#!/usr/bin/env python3
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
tokenizer = AutoTokenizer.from_pretrained('t5-small')
input_ids = tokenizer("Translate English to German: Today is a nice day.", return_tensors="pt").input_ids
encoder_outputs = model.encoder(input_ids)
decoder_input_ids = torch.ones_like(input_ids)[:, :1] * model.config.decoder_start_token_id
model_kwargs = {"encoder_outputs": encoder_outputs}
sequences = model.greedy_search(decoder_input_ids, **model_kwargs)
print("Output:", tokenizer.batch_decode(sequences))
# => prints `['<pad> Heute ist ein schöner Tag.</s>']
```
I do very much admit though that this is too complicated and it also took me a bit. @JoaoLages think we need to improve our docs here no?<|||||>Thanks!
>I do very much admit though that this is too complicated and it also took me a bit. @JoaoLages think we need to improve our docs here no?
I think it would be simpler to change `T5ForConditionalGeneration.greedy_search` to have this code inside it, so that we could simply call `model.greedy_search(input_ids)` <|||||>Sorry also meant to ping @gante here<|||||>@patrickvonplaten Trying to understand the problem -- am I right in saying that we want to use the generation methods directly for backpropagation purposes (because `generate()` won't work there), and thus we need to document their proper use (because `generate()` does a lot of input preparation)?<|||||>Good point!
I think my idea back when we added the sub-methods was to push the community to use those directly instead of the more "magic" `.generate()` function, the reason being that it's harder and harder to cover every use case in `generate()`, whereas the sub-methods are very "bare-bone" without any magic, which means that if one knows how to use them they can more or less cover every use case.
Now, that failed a bit I think because 99.9% of people just use `generate(...)`, probably because of how difficult it is to understand and use the sub-methods directly (as shown here: https://github.com/huggingface/transformers/issues/3720#issuecomment-1235775528 <- that's too difficult to understand/know).
So just pinged you here to be aware of this and was wondering whether it could make sense to think about providing better guides for the sub-method, maybe even changing the submethods or continue to not spend much time on them. Don't think it's an urgent thing to think about though!<|||||>@patrickvonplaten @gante
At least [these docs](https://github.com/huggingface/transformers/blob/6678350c01629b848aa9c41e169da5d6b8d9e7e9/src/transformers/generation_utils.py#L1652) should be updated with the code that @patrickvonplaten shared in [here](https://github.com/huggingface/transformers/issues/3720#issuecomment-1235775528)<|||||>Just a heads up that I think some of these methods (if you want a continuous gradient) might have to use the softmax trick: https://datascience.stackexchange.com/questions/58376/gumbel-softmax-trick-vs-softmax-with-temperature to get a differentiable final next token. At least when I checked this out a while back that seemed to be the case but ¯\_(ツ)_/¯
<|||||>Using the approach above with `greedy_search` and a T5 model, I'm still not seeing a `grad_fn` associated with the output logits. Was anyone able to get this working with a T5 architecture?<|||||>> Using the approach above with `greedy_search` and a T5 model, I'm still not seeing a `grad_fn` associated with the output logits. Was anyone able to get this working with a T5 architecture?
In order to get the gradient per step, you need to do the greedy decoding on your own. Try using `model.forward` instead to get the gradient and the next token, then you need to concatenate that generated token with the `decoder_input_ids` and repeat the process.
If you want to test this fast, you can use the [ecco package](https://github.com/jalammar/ecco) that I've helped build. It has logic for doing this gradient calculation for any kind of sampling approach (greedy/beam/sample/etc) and for models like T5 or GPT. It is not very optimized in terms of inference times though, I must warn you.<|||||>@JoaoLages that is a really helpful starting point, thank you! I'm not sure I see a beam search sampling process in the code (but perhaps I'm looking in the wrong place). I do see a TODO in `sample_output_token` to add beam search in the future.<|||||>There isn't beam search yeah. What we actually do is that we use the normal `model.generate` method to use beam search, and then we feed the generated tokens through the model to calculate their gradient. So we actually do the generation step 2 times, but in the second we capture the gradients. It's slow, but it could be optimized if we did our custom beam search. |
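For reference, a rough sketch of the "decode manually and keep the computation graph" idea described above (illustrative only: it assumes a recent transformers version, re-runs the full forward pass at every step and ignores caching):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: Today is a nice day.", return_tensors="pt").input_ids
decoder_input_ids = torch.full((1, 1), model.config.decoder_start_token_id, dtype=torch.long)

step_logits = []
for _ in range(20):  # no torch.no_grad() anywhere, so gradients are traced
    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
    next_token_logits = outputs.logits[:, -1, :]
    step_logits.append(next_token_logits)
    next_token = next_token_logits.argmax(dim=-1, keepdim=True)  # greedy pick (the argmax itself is not differentiable)
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break

# Example: backprop some scalar built from the per-step logits through the model.
loss = torch.stack([l.logsumexp(dim=-1).sum() for l in step_logits]).sum()
loss.backward()
```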
transformers | 3,719 | closed | Unable to load german BERT model | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import *
model = TFBertModel.from_pretrained('bert-base-german-dbmdz-cased')
```
I get the following error trace -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/daksh/miniconda3/envs/1.8/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 351, in from_pretrained
assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
File "/Users/daksh/miniconda3/envs/1.8/lib/python3.6/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
## Expected behavior
Model should be loaded
## Environment info
- `transformers` version: 2.4.1
- Platform: Mac OS
- Python version: 3.6.5
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-09-2020 11:16:31 | 04-09-2020 11:16:31 | That particular model doesn't have a TF version: https://huggingface.co/bert-base-german-dbmdz-cased#list-files
However, you should be able to convert the PyTorch version to TF, using the `from_pt=True` flag.<|||||>Thanks for clarifying that @julien-c . Is it possible to add this information(regarding availability of TF and pytorch models) somewhere on this [page](https://huggingface.co/transformers/pretrained_models.html) or maybe a dedicated table for it. It's quite useful info for frameworks which depend on Transformers. <|||||>I'm working on converting our DBMDZ models to TF 😅<|||||>The same issue is true for the `uncased` version. Is there a way to force to HuggingFace to download the Torch version instead? <|||||>Yes, as I said: `from_pt=True`<|||||>@hotzenklotz You can now use the model under our DBMDZ namespace: `dbmdz/bert-base-german-cased`.
I've uploaded the TF-compatible model and it can be used with:
```bash
from transformers import *
model = TFBertModel.from_pretrained('dbmdz/bert-base-german-cased')
```
Please let me know if it's working for you!<|||||>@stefan-it Thank you so much. Can we expect to see a TF version of the `uncased` model as well? (And what about Roberta?)<|||||>`dbmdz/bert-base-german-uncased` has also a TF-compatible model now :)
German RoBERTa is currently not planned on our side (unless there's a TPU-supported pre-training script out there) 😅<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
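For completeness, the conversion route julien-c mentioned looks roughly like this (a sketch; the output directory is arbitrary):
```python
from transformers import TFBertModel

# Load the PyTorch checkpoint and convert it to TensorFlow on the fly.
model = TFBertModel.from_pretrained("bert-base-german-dbmdz-cased", from_pt=True)
model.save_pretrained("./bert-base-german-dbmdz-cased-tf")  # writes a tf_model.h5 you can reload directly
```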
|
transformers | 3,718 | closed | loading from tf_ckp and this showed up: AttributeError: 'BertCrf' object has no attribute 'bias' . | 04-09-2020 10:35:45 | 04-09-2020 10:35:45 | ||
transformers | 3,717 | closed | how to use transformers with gpu | I want to load a model on the GPU in transformers, but it seems like the model always loads on the CPU.
My OS is Deepin 15.11, Python 3.7.5, pytorch-gpu 1.4, transformers 2.8. | 04-09-2020 10:26:33 | 04-09-2020 10:26:33 | This is a question more suitable to Stack Overflow or another forum |
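For reference, a minimal sketch of explicitly moving a model and its inputs to the GPU (the checkpoint and input text are placeholders):
```python
import torch
from transformers import BertModel, BertTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").to(device)  # move the weights to the GPU

input_ids = tokenizer.encode("Hello world", return_tensors="pt").to(device)  # inputs must be on the same device
with torch.no_grad():
    outputs = model(input_ids)
```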
transformers | 3,716 | closed | Shift labels internally within TransfoXLLMHeadModel when called with labels | Fixes #3711 . | 04-09-2020 10:16:32 | 04-09-2020 10:16:32 | I would also like to return a (1,)-sized tensor for the loss when called with labels as that's easier for the user, what the models do, and what the old documentation said TransfoXLLMHeadModel did. |
transformers | 3,715 | closed | How can I do conditional fine-tuning with GPT2? | I can use run_generation.py to generate a statement by adding context.
But is there a way to do fine-tuning conditioned on that context?
For example, when data of the form "context [SEP] sentence" is the input, the "context" would only be used to obtain the hidden state, without being learned.
In addition, the "sentence" is learned with the language model. | 04-09-2020 09:33:34 | 04-09-2020 09:33:34 | To me, this sound more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the "context" and the decoder would be teacher-forced on the sentence.<|||||>> To me, this sound more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the "context" and the decoder would be teacher-forced on the sentence.
Thx very much :)<|||||>Perhaps, Is there such logic applied to training code now?<|||||>@toriving I've successfully done "conditional" fine-tuning by adding a new token that indicates which portion of the sequence refers to the "context", similar to the [SEP] token used in the multi sequence version of **BERT**.
E.g. Here's an [example](https://github.com/enzoampil/tito-joker/blob/master/src/utils/process_jokes.py) of how I apply this to prepare a dataset for training GPT2 to generate answers to riddle jokes:
```
<soq> Why did the chicken cross the road? <eoq> To go to the other side <|endoftext|>
```
The effect is that the answer (after `<eoq>`) is conditional on the question that precedes it.<|||||>@enzoampil When learning with such data, is the "condition" also used in the loss function?
I mean, I am wondering if "Condition" is also learning with a language model.<|||||>Yes if you specify it like above it should<|||||>Okay. Thanks<|||||>> To me, this sound more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the "context" and the decoder would be teacher-forced on the sentence.
I would like to ask whether you think that using the encoder-decoder model (wrapping GPT-2 as both encoder and decoder) would give reasonable results, or whether wrapping GPT-2 as the encoder is not a good idea (maybe use BERT as the encoder?).<|||||>Currently only bert2bert is supported with the EncoderDecoder structure.<|||||>> @toriving I've successfully done "conditional" fine-tuning by adding a new token that indicates which portion of the sequence refers to the "context", similar to the [SEP] token used in the multi sequence version of **BERT**.
>
> E.g. Here's an [example](https://github.com/enzoampil/tito-joker/blob/master/src/utils/process_jokes.py) of how I apply this to prepare a dataset for training GPT2 to generate answers to riddle jokes:
>
> ```
> <soq> Why did the chicken cross the road? <eoq> To go to the other side <|endoftext|>
> ```
>
> The effect is the answer (after `<eoq>`), is conditional on the question that precedes it.
I would like to ask whether you masked the input (context) part of the labels in the forward function. What I mean is that you presumably pass labels=input_ids to the forward function. Do you set only the padding tokens as masked (value -100), or do you mask the context tokens too? Since we are doing conditional generation, I think only the reply should count towards the loss (see the sketch below).
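One common approach (a sketch under the assumption that you want the loss only on the reply; the marker tokens and checkpoint are illustrative) is to clone the input ids as labels and set the context positions to -100 so they are ignored by the loss:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Register the marker tokens so they are single tokens rather than sub-word pieces.
tokenizer.add_special_tokens({"additional_special_tokens": ["<soq>", "<eoq>"]})
model.resize_token_embeddings(len(tokenizer))

context = "<soq> Why did the chicken cross the road? <eoq>"
reply = " To go to the other side <|endoftext|>"

context_ids = tokenizer.encode(context, return_tensors="pt")
reply_ids = tokenizer.encode(reply, return_tensors="pt")
input_ids = torch.cat([context_ids, reply_ids], dim=-1)

labels = input_ids.clone()
labels[:, : context_ids.shape[-1]] = -100  # -100 positions are ignored by the loss

outputs = model(input_ids, labels=labels)
loss = outputs[0]  # loss is computed on the reply tokens only
```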
|
transformers | 3,714 | closed | Zero shot multilingual BERT | ❓ Questions & Help
I have a doubt about the usage of multilingual BERT.
I did domain adaptation of the language model **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)** on a dataset of a slightly different kind of text. This dataset is skewed towards English but also contains other languages such as Italian.
Then I fine-tuned on an Italian NER dataset.
Does this kind of training have a name, like **zero shot** classification (because I adapted on a multilingual dataset skewed towards English and then fine-tuned on Italian)?
Or maybe the 0 shot would be in the case in which I finetuned with an Italian dataset and then I evaluated on a corpus of another language?
Thanks | 04-09-2020 09:31:34 | 04-09-2020 09:31:34 | I think what you did there is probably best described as transfer learning.
> Or maybe the 0 shot would be in the case in which I finetuned with an Italian dataset and then I evaluated on a corpus of another language?
Yeah, this is closer to the way the term "zero-shot" is being used in the field right now. Here's a [recent example](https://arxiv.org/abs/1812.10464).
I'll also note that the way I've seen "zero shot learning" used traditionally was pretty narrow: it meant training a classifier on one set of labels and then evaluating on a different set of labels on in-domain data. Recently, especially in NLP, it's often been used more broadly to mean "do a task that the model wasn't explicitly trained on without additional fine tuning", e.g. in the GPT-2 paper.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,713 | closed | cannot determine what will be the cardinality of the output after applying glue_convert_examples_to_features [TF 2.2.0rcx] | # 🐛 Bug
## Information
With Tensorflow 2.2.0 (2.2.0rc2) we should be able to see the number of entries in the data without looking over them and using tf.data.experimental.cardinality.
One issue that I found is that after applying `glue_convert_examples_to_features`, tf.data.experimental.cardinality is not able to find the total number of entries. At first I thought it was a bug in this TF 2.2.0 release candidate: https://github.com/tensorflow/tensorflow/issues/37998.
When using data from tensorflow_datasets, tf.data.experimental.cardinality returns the number of entries:
```
print(data['train'])
print(tf.data.experimental.cardinality(data['train']))
```
```
<DatasetV1Adapter shapes: {idx: (), label: (), sentence: ()}, types: {idx: tf.int32, label: tf.int64, sentence: tf.string}>
tf.Tensor(67349, shape=(), dtype=int64)
```
Now when I am using Huggingface transformer that modify the structure of the data:
```
train_dataset = glue_convert_examples_to_features(data['train'],
tokenizer,
max_length=128,
task='sst-2')
print(tf.data.experimental.cardinality(train_dataset))
```
```
<FlatMapDataset shapes: ({input_ids: (None,), attention_mask: (None,), token_type_ids: (None,)}, ()), types: ({input_ids: tf.int32, attention_mask: tf.int32, token_type_ids: tf.int32}, tf.int64)>
tf.Tensor(-2, shape=(), dtype=int64)
```
When the input pipeline contains a flat_map, it is generally not possible to statically determine the cardinality of the output from the cardinality of the input. I don't see any flat_map in this function, so I am trying to identify which part of the code is responsible. I am not 100% sure this is a transformers issue.
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
import tensorflow_datasets
from transformers import (
BertConfig,
BertTokenizer,
TFBertModel,
TFBertForSequenceClassification,
glue_convert_examples_to_features,
glue_processors
)
data, info = tensorflow_datasets.load(name='glue/sst2',
data_dir='/tmp/',
with_info=True)
pretrained_weights = 'bert-base-multilingual-uncased'
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
# recap of input dataset
print(data['train'])
print(tf.data.experimental.cardinality(data['train']))
# Prepare data for BERT
train_dataset = glue_convert_examples_to_features(data['train'],
tokenizer,
max_length=128,
task='sst-2')
# recap of pre processing dataset
print(train_dataset)
print(tf.data.experimental.cardinality(train_dataset))
```
## Expected behavior
I am expecting tf.data.experimental.cardinality to still be able to report the total number of entries after transforming the data with `glue_convert_examples_to_features`
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: MacOS 0.14.6
- Python version: 3.7.5
- Tensorflow version (CPU): 2.2.0rc2 (v2.2.0-rc1-34-ge6e5d6df2a 2.2.0-rc2)
| 04-09-2020 08:40:18 | 04-09-2020 08:40:18 | This issue might be of interest to @jplu <|||||>Hey @tarrade
Usually it is not advised to use the cardinality function for several reasons, the biggest two are: 1) it is still experimental, 2) cardinality works only with TF datasets created with `from_tensors` or `from_tensor_slices` which is not the case in the `glue_convert_examples_to_features` function.
If you need to know the size of your dataset from a TF dataset, there are two simple solutions:
```
# "dataset" is the variable that represents your tf.data.dataset
# works only from TF 2.1 because of the as_numpy_iterator() method
len(list(dataset.as_numpy_iterator())
```
Or
```
# works for all the TF versions
dataset.reduce(0, lambda x, _: x + 1)
```<|||||>Hi @jplu,
yes,
this is experimental but so much better that looping over the full dataset just to get the size while such info is almost here for free.
and it doesn't work out of the box with tf.data.Dataset.from_generator
But it is already used in the transformers code:
https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py (line 87)
`len_examples = tf.data.experimental.cardinality(examples)`
Something like the following, in the right place in the code, should work:
`tf.data.experimental.assert_cardinality(len_examples)`
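A small sketch of how that assertion could be applied to the converted dataset (requires TF 2.2+, and `num_examples` is a count you already know, e.g. from the tfds metadata):
```python
import tensorflow as tf

num_examples = info.splits["train"].num_examples  # known from the tfds metadata returned by tensorflow_datasets.load
train_dataset = train_dataset.apply(
    tf.data.experimental.assert_cardinality(num_examples)
)
print(tf.data.experimental.cardinality(train_dataset))  # now reports num_examples instead of -2 (UNKNOWN)
```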
<|||||>It works in glue.py because `examples` is the direct output of the `tfds.load()` function which returns compatible TF Datasets for `cardinality`.
Getting the size of a TF Dataset is very complicated because of the way it is structured. It is a known problem; many issues are open in the TF repo because of that. That's also why the `cardinality` method has been in the experimental module for a very long time now, since well before TF 2.0 was released.
If you are not ok to use one of the two solution I proposed, explain me your use case? Are you using Glue from the `tensorflow_datasets` package? If yes, you have several facilities proposed by this package (https://www.tensorflow.org/datasets/overview)<|||||>Hi @jplu,
I understand that using experimental features of TensorFlow may introduce some instability. That is a fair point, and `tf.data.experimental.assert_cardinality` is only available from TF 2.2.0.
My main points and usecases are:
1- Computing the number of elements in a dataset is very time consuming since you need to loop over all elements.
2- Normally, even if the data comes from `tfds.load()`, you need some cleaning or preprocessing steps, or maybe you want to resample your train/test/valid splits. In that case the total number from the metadata (info) will not help since it has changed. This is a normal process in any ML project.
3- In the version of the code I was looking at, the length was computed anyway (this doesn't seem to be the case any more after the big clean-up from 3 days ago). This was my main argument: the total number of examples is computed in any case, so why not simply assert the cardinality, so that any dataset produced by `glue_convert_examples_to_features` carries the total number of examples it contains, essentially for free (no extra computation required).
4- Now `tf.data.experimental.assert_cardinality(len_examples)` is experimental, requires TF 2.2.0, and in the current head of the code the length doesn't seem to be computed any more.
5- One issue is that as soon as I store the data preprocessed with `glue_convert_examples_to_features` as TFRecord files, the cardinality will be lost.
Conclusion: I will take care of asserting the cardinality in my own code, and I hope that once TF 2.2.0 is the main release and cardinality is more stable we can rediscuss this topic.<|||||>Good points :)
1. I fully agree with you, it is really painful.
2. If you need to change the content of each dataset (whether it is preprocessing or not), such as when doing cross-validation, then indeed you have to recompute the size.
3. There is currently a project to fully review and rework the data processing part of the lib, so it should be way more convenient to use once done. Until there, indeed, it is a bit of a mess.
4. I was not aware of this new `tf.data.experimental.assert_cardinality(len_examples)` as I did not fully dive in TF 2.2 yet, but looks very interesting, thanks for the hint :)
5. Indeed, the size cannot be computed from TFRecords, which is a real problem IMHO. I hope in future releases it will be much easier to get the size of a dataset ^^
I will be happy to rediscuss this; very sorry that I could not find a suitable solution to your issue. |
transformers | 3,712 | closed | Text Generation with XLNet is very Slow | Using the run_generation script to generate text with XLNet is currently extremely slow compared to GPT-2 as mentioned in [this issue](https://github.com/huggingface/transformers/issues/789)
> To generate 100 tokens, XLNet takes **3m22s** while GPT-2 takes **14s**. And it grows exponentially : for 500 tokens, XLNet takes **51m46s** while GPT-2 takes **2m52s**.
More information on why this might be happening is included in the issue. However, it was closed before it could be resolved. If anyone could look into this and maybe reopen the original issue, that would be greatly appreciated.
Thanks! | 04-09-2020 07:18:10 | 04-09-2020 07:18:10 | Reopened the previous issue: https://github.com/huggingface/transformers/issues/789 and will take a look next week :-) <|||||>Closing this as well. Reasons are explained in #789. |
transformers | 3,711 | closed | TransfoXLLMHead doesn't shift labels internally when called for loss | # 🐛 Bug
When called with labels to get the language-modeling loss, `TransfoXLLMHead.forward` computes the NLLLoss of the outputs directly against the labels, rather than against the shifted labels like the documentation indicates (and like the other models). This makes it impossible to train with `lm_labels = input_ids` as suggested by the doc.
## Information
Model I am using: TransformerXL
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts:
The task I am working on is:
* [x] my own task or dataset:
## To reproduce
```
import torch
from transformers import TransfoXLConfig, TransfoXLLMHeadModel
config = TransfoXLConfig()
lm = TransfoXLLMHeadModel(config)
test_tensor = torch.LongTensor([[0]])
print(lm(input_ids=test_tensor, labels=test_tensor)[0])
```
A 1x1 loss tensor is returned.
## Expected behavior
As there is only 1 token in the input tensor, no loss should be returned: there's no next label to compare the output against. For example, running this with GPT2
```
import torch
from transformers import GPT2Config, GPT2LMHeadModel
config = GPT2Config()
lm = GPT2LMHeadModel(config)
test_tensor = torch.LongTensor([[0]])
print(lm(input_ids=test_tensor, labels=test_tensor)[0])
```
returns `tensor(nan, grad_fn=<NllLossBackward>)`.
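For reference, this is the shift convention the other LM heads implement internally (a generic sketch with random tensors, not the TransfoXL code): position i is scored against label i+1, so a length-1 sequence has no target.
```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 5, 100)           # (batch, seq_len, vocab), e.g. the LM head output
labels = torch.randint(0, 100, (1, 5))    # labels == input_ids

shift_logits = logits[..., :-1, :].contiguous()  # drop the last position (nothing to predict after it)
shift_labels = labels[..., 1:].contiguous()      # drop the first label (nothing predicts it)
loss = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
```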
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-5.3.0-45-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| 04-09-2020 06:59:22 | 04-09-2020 06:59:22 | |
transformers | 3,710 | closed | inconsistent tokenize output | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): Chinese
The problem arises when using: BertTokenizer
I am using BertModel. When predicting, I get inconsistent output (the output differs from run to run). It turns out that the tokenizer gives different output when the input string is "\n" or "\n\n".
for example:
<pre><code>
from transformers.tokenization_bert import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
tokens = tokenizer.tokenize("\n\n")
output: ['[SEP]'] or ['[PAD]'] or ['[CLS]']
</code></pre>
| 04-09-2020 05:32:03 | 04-09-2020 05:32:03 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,709 | closed | Add model tag | Add model tag to be correctly indexed while working on card description | 04-09-2020 02:37:18 | 04-09-2020 02:37:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=h1) Report
> Merging [#3709](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6435b9f908e7361330db89e263a65b0a58060d11&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3709 +/- ##
==========================================
- Coverage 78.13% 78.12% -0.01%
==========================================
Files 104 104
Lines 17723 17723
==========================================
- Hits 13847 13846 -1
- Misses 3876 3877 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3709/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.79% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=footer). Last update [6435b9f...be477cf](https://codecov.io/gh/huggingface/transformers/pull/3709?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,708 | closed | Add model tag | Add model tag to be correctly indexed while working on description | 04-09-2020 02:35:35 | 04-09-2020 02:35:35 | |
transformers | 3,707 | closed | Distributed training on multiple GPU nodes is slower than on single GPU node | Our team uses pre-trained model and transformers has been a great help.
While trying to benchmark the training speed of our computing infrastructure, we were running this example:
https://github.com/huggingface/transformers/blob/v2.3.0/examples/run_lm_finetuning.py
We found that on a single p3.16xlarge GPU instance, DDP would take about 36:33 to train on wikitext-103-raw.
However, once we move to two p3.16xlarge GPU instances, it takes more time (1:45:49) for the same dataset.
I would like to know what could possibly cause this.
There are a couple of things I am suspecting:
1. The synchronization of gradients/parameters between different processes
2. The optimizer that is used in this script.
Any help is appreciated. Thanks! | 04-08-2020 21:41:32 | 04-08-2020 21:41:32 | Are those `p3.16xlarge` is the same AZ? Even in the same AZ, the throughput and latency between machines might be the bottleneck here.<|||||>i.e. you usually need infiniband between cluster machines. But @mfuntowicz knows about this stuff way better than I do...<|||||>Thanks for your reply. The two hosts are indeed in the same AZ.
Here is how i run them:
```
# Node 1:
python3.6 -m torch.distributed.launch \
--nproc_per_node=8 \
--nnodes=2 \
--node_rank=0 \
--master_addr="10.0.0.83" \
--master_port=12345 \
run_lm_finetuning.py \
--output_dir=output \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
# Node 2
python3.6 -m torch.distributed.launch \
--nproc_per_node=8 \
--nnodes=2 \
--node_rank=1 \
--master_addr="10.0.0.83" \
--master_port=12345 \
run_lm_finetuning.py \
--output_dir=output \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
```
I also have the nccl debug info here:
The first part is on multiple nodes, where the training is slow. The second part is on single node, and the training is fast. I can definitely see that on single node, there are many Channels, which can't be found on multiple node.
https://gist.github.com/YingleiZhang/a8df48eb534ba20ff8f26b5309094b55
I was also suspecting that I might need high-speed connections (like InfiniBand) between cluster machines, but in that case, would MPI help? My PyTorch was not built with MPI yet. <|||||>I think the bandwidth between the two nodes is indeed the problem. Consider the amount of data (our estimate is about 8G per step) we need to move between the nodes; the bandwidth for intra-GPU communication is much higher than that of inter-node communication. The NCCL folks mentioned that this could be 120 GB/s vs 10 GB/s for the all-reduce operation. (See https://github.com/NVIDIA/nccl/issues/318)
I am closing this issue here. Thanks for the help. <|||||>Thanks for investigating.
Also from your logs @mfuntowicz was saying that it looks like NCCL does not use [AWS EFA](https://aws.amazon.com/hpc/efa/) – maybe something to investigate there. |
transformers | 3,706 | closed | Cleanup fast tokenizers integration | First PR to clean up the fast tokenizers integration. | 04-08-2020 21:08:22 | 04-08-2020 21:08:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=h1) Report
> Merging [#3706](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f0c96fafd16d206b22a74fe76b251414f7314703&el=desc) will **decrease** coverage by `0.82%`.
> The diff coverage is `90.64%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3706 +/- ##
==========================================
- Coverage 78.47% 77.65% -0.83%
==========================================
Files 106 106
Lines 17930 17904 -26
==========================================
- Hits 14071 13903 -168
- Misses 3859 4001 +142
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `89.42% <ø> (-0.20%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `95.29% <ø> (-0.04%)` | :arrow_down: |
| [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.07% <ø> (-0.79%)` | :arrow_down: |
| [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `36.25% <ø> (+0.88%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <ø> (-0.08%)` | :arrow_down: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `96.82% <ø> (-0.05%)` | :arrow_down: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.71% <ø> (-0.12%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <ø> (-0.29%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `83.33% <ø> (-0.14%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `97.61% <ø> (-0.06%)` | :arrow_down: |
| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/3706/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=footer). Last update [f0c96fa...5b54450](https://codecov.io/gh/huggingface/transformers/pull/3706?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok here is a new version @mfuntowicz.
I'll add some tests later before merging. |
transformers | 3,705 | closed | Update tokenizers to 0.7.0-rc5 | This includes some bug fix (around added tokens), and a small breaking change. | 04-08-2020 20:32:06 | 04-08-2020 20:32:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=h1) Report
> Merging [#3705](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc65afc4dfac3badf3de3be395d4023b44c61bdd&el=desc) will **decrease** coverage by `0.11%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3705 +/- ##
==========================================
- Coverage 78.14% 78.02% -0.12%
==========================================
Files 104 104
Lines 17723 17710 -13
==========================================
- Hits 13849 13818 -31
- Misses 3874 3892 +18
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.50% <0.00%> (ø)` | |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.67% <100.00%> (-0.29%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.76% <0.00%> (-1.77%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.32% <0.00%> (-0.58%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.07% <0.00%> (-0.47%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.84% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `85.53% <0.00%> (-0.05%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.40% <0.00%> (-0.03%)` | :arrow_down: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/3705/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=footer). Last update [bc65afc...e31554a](https://codecov.io/gh/huggingface/transformers/pull/3705?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,704 | closed | Queries about the Notation and Model training of T5 and ELECTRA sentiment classification. | I have a few questions about the model notation, and also a short question about T5 and ELECTRA. I would have liked to make separate issues, but the things are not too complex. I mainly work on CV, so sorry if I'm being silly.
### 1 Cased or Uncased
What is meant by cased and uncased?
```
bert-base-uncased
bert-base-cased
```
### 2 Suffix
I was trying to run the XLM model, and among the pre-trained models I found the following weights. I understood the XLM-MLM part but couldn't work out the rest, e.g. `enfr-1024, enro-1024`, etc.
```
xlm-mlm-enfr-1024
xlm-mlm-enro-1024
xlm-mlm-tlm-xnli15-1024
```
### 3 Sentiment Analysis using T5 and ELECTRA
Is it possible to use these two models for sentiment classification, simply as binary classification? How can we implement these two transformers? I have a high-level overview of T5: it treats both input and target as text. I [found this](https://github.com/google-research/text-to-text-transfer-transformer/issues/109) useful, but had a bit of trouble implementing it. Using transformers, is there a convenient way to do this? | 04-08-2020 20:24:02 | 04-08-2020 20:24:02 | Hi!
- 1 - casing is the difference between lowercasing and uppercasing. Uncased models do not handle uppercase letters, and therefore lowercase them:
```py
from transformers import AutoTokenizer
uncased_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
cased_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
print(uncased_tokenizer.tokenize("Hi, this is Lysandre"))
# ['hi', ',', 'this', 'is', 'l', '##ys', '##and', '##re'] <-- notice how uppercase letters are now lowercased
print(cased_tokenizer.tokenize("Hi, this is Lysandre"))
# ['Hi', ',', 'this', 'is', 'L', '##ys', '##and', '##re']
```
- 2 - These should be clarified with model cards on the [model hub](https://huggingface.co/models) but we haven't gotten to changing them yet.
XLM models are usually multilingual, which is the case for those you mentioned: `ende` means english-german, `enfr`, english-french, `xnli15` means the 15 languages that are used in [XNLI](https://www.nyu.edu/projects/bowman/xnli/).
The following number is the hidden size, e.g. `1024` means that the hidden size of the model is 1024.
- 3 - You may use T5 for sentiment classification, and ELECTRA as well but with a bit of additional work.
As @craffel said in the issue you mentioned, T5 was trained with SST-2 so should work out-of-the-box if you follow what he mentioned in this issue.
There is no current `ElectraForSequenceClassification` as ELECTRA is so new, but it will certainly make its way into the library in the coming weeks! Once this head is here (feel free to add it yourself, it would be as easy as copying one head from another modeling file and adapting it for ELECTRA), ELECTRA can be used for sentiment classification, but it would require you to fine-tune it first on a sentiment classification dataset (like the SST-2 dataset).
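For anyone who wants to try this before it lands, a rough sketch of such a head (a hypothetical class, not the actual `ElectraForSequenceClassification` API; the checkpoint name is illustrative):
```python
import torch.nn as nn
from transformers import ElectraModel

class ElectraForBinaryClassification(nn.Module):
    """Hypothetical sequence-classification head on top of ELECTRA."""

    def __init__(self, model_name="google/electra-small-discriminator", num_labels=2):
        super().__init__()
        self.electra = ElectraModel.from_pretrained(model_name)
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(self.electra.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        sequence_output = self.electra(input_ids, attention_mask=attention_mask)[0]
        pooled = sequence_output[:, 0, :]  # first token; ELECTRA has no pooler layer
        logits = self.classifier(self.dropout(pooled))
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits, labels)
            return loss, logits
        return logits
```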
If you're looking at easy sentiment classification, please take a look at the pipelines and at the [already-finetuned sequence classification models](https://huggingface.co/models?filter=text-classification) and look for sentiment classification especially.<|||||>@LysandreJik thanks, it was helpful 🙂<|||||>Hi, it is easy to use the pre-trained T5 models for sentiment ID. You could do something like
```Python
MODEL_NAME = "t5-base"
model = transformers.T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
input_text = "sst2 sentence: This movie was great! I loved the acting."
inputs = tokenizer.encode_plus(input_text, return_token_type_ids=False, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0]))
input_text = "sst2 sentence: The acting was so bad in this movie I left immediately."
inputs = tokenizer.encode_plus(input_text, return_token_type_ids=False, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0]))
```
The `"sst2 sentence:"` prefix is what we used for the SST-2 task. It is a sentiment ID task. The model needs to see this prefix to know what task you want it to undertake.<|||||>Hi, @craffel Thank for your quick response and the intuitive code snippet. As I said, I am trying to implement **T5** for a `binary sentiment classification` task (label as `1` and `0`). So, if I want to use **T5**, I've to treat my task as a **text-to-text**, in other words, `positive` and `negative`. But I feel a bit confused, if I have the following scenario how should I approach.
## Model loading
```python
MODEL_NAME = "t5-base"
transformer_layer = transformers.T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
```
## A general encoder
```python
def regular_encode(texts, tokenizer, maxlen=512):
enc_di = tokenizer.batch_encode_plus(
texts,
return_attention_masks=False,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=maxlen
)
return np.array(enc_di['input_ids'])
```
## Build the model (as per my task)
```python
def build_model(transformer, max_len=190):
input_word_ids = Input(shape=(max_len,), dtype=tf.int32)
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(1, activation='sigmoid')(cls_token)
model = Model(inputs=input_word_ids, outputs=out)
model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
return model
```
## Tokenize the data and grab the integer targets (1, 0)
```python
x_train = regular_encode(data.text, tokenizer, maxlen=190)
y_train = data.target.values # (0, 1)
model = build_model(transformer_layer, max_len=190)
model.fit...
model.predict...
```
I'm sure I'm missing some crucial part of the text-to-text approach. If I convert the `1` and `0` labels to `Positive` and `Negative`... I mean, shouldn't the target be numeric? And the prefix `sst2 sentence:` is, in other words, a string indicator to inform the model about the goal or task, right? So, do I have to add this string at the beginning of every text sentence (sample)?<|||||>> I'm sure I'm missing some crucial part of the text-to-text approach. If I convert the 1 and 0 labels to Positive and Negative... I mean, shouldn't the target be numeric?
No, the target should *always* be text for T5. You should map your 0/1 labels to the words "negative" and "positive" and fine-tune T5 to predict those words, and then map them back to 0/1 after the model outputs the text if needed. This is the point of the text-to-text framework - all tasks take text as input and produce text as output. So, for example, your "build model" code should not include a dense layer with a sigmoid output, etc. There is no modification to the model structure necessary whatsoever.
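For illustration, a minimal sketch of that recipe; the example sentence and the 0/1-to-word mapping are placeholders, and older releases name the decoder-target keyword `lm_labels` instead of `labels`:
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

text, label = "sst2 sentence: The acting was wonderful.", 1
target_text = "positive" if label == 1 else "negative"  # map 0/1 to words

input_ids = tokenizer.encode(text, return_tensors="pt")
target_ids = tokenizer.encode(target_text, return_tensors="pt")

outputs = model(input_ids=input_ids, labels=target_ids)  # `lm_labels` in older versions
loss = outputs[0]
loss.backward()  # an optimizer step would follow in a real fine-tuning loop

# at inference time, generate text and map it back to 0/1 if needed
prediction = tokenizer.decode(model.generate(input_ids)[0])
predicted_label = 1 if "positive" in prediction else 0
```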
> And the prefix sst2 sentence: is, in other words, a string indicator to inform the model about the goal or task, right? So, do I have to add this string at the beginning of every text sentence (sample)?
Yes, that is the intention.<|||||>@LysandreJik @craffel
Please check this issue!
As per the discussion, I have a similar approach for binary classification on text, but it seems I am doing something wrong. I have also converted the targets 0 and 1 to the strings "0" and "1". I don't know where I am going wrong.
```
MODEL_NAME = "t5-base"
transformer_layer = transformers.TFT5ForConditionalGeneration.from_pretrained(MODEL_NAME)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_NAME)
```
```
def regular_encode(texts, tokenizer, maxlen=512):
enc_di = tokenizer.batch_encode_plus(
texts,
return_attention_masks=False,
return_token_type_ids=False,
pad_to_max_length=True,
max_length=maxlen
)
return np.array(enc_di['input_ids'])
```
```
def build_model(transformer, max_len=190):
input_word_ids = Input(shape=(max_len,), dtype=tf.int32)
sequence_output = transformer(input_word_ids)[0]
cls_token = sequence_output[:, 0, :]
out = Dense(1, activation='sigmoid')(cls_token)
model = Model(inputs=input_word_ids, outputs=out)
model.compile(Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
return model
```
```
x_train = regular_encode(train_df.new_text, tokenizer, maxlen=190)
y_train = train_df.target.values  # targets 0 and 1, converted to the strings "0" and "1"
model = build_model(transformer_layer, max_len=190)
```
```
ValueError: in converted code:
/opt/conda/lib/python3.6/site-packages/transformers/modeling_tf_t5.py:854 call *
encoder_outputs = self.encoder(
/opt/conda/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:822 __call__
outputs = self.call(cast_inputs, *args, **kwargs)
/opt/conda/lib/python3.6/site-packages/transformers/modeling_tf_t5.py:445 call
raise ValueError("You have to specify either input_ids or inputs_embeds")
ValueError: You have to specify either input_ids or inputs_embeds
```
All inputs are converted to this format
"sst2 sentence: our deeds are the reason for this..."
I used the same approach but I am having trouble with this error. I need to fine-tune the model on my custom dataset.<|||||>Hi @vapyc, this seems to be an unrelated issue. Would you mind opening a new issue? When you do, would it be possible for you to show the entire stack trace, e.g. the line where it fails in your code, alongside all the information you've provided here? Thanks.<|||||>@LysandreJik I'd be very interested in an `ElectraForSequenceClassification` head, as I'm not confident I could implement it myself since I'm quite new to Transformers and still learning how the library is organized. Any chance this is coming soon?<|||||>I just posted a pull request ... it was super simple to get it working
https://github.com/huggingface/transformers/pull/4257<|||||>@liuzzi awesome! I look forward to trying it out.<|||||>@liuzzi wonderful, thanks a lot. Well done brother. Can you share a working notebook on this, please? Thank you.<|||||>@innat i did not use a notebook to fine-tune, but for sentiment analysis you can just use the run_glue.py script with the SST-2 task which is a binary sentiment analysis task. You shouldn't even need to change any code, just make sure your dataset follows the format of SST-2.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,703 | closed | Token-level regression mode added in ForTokenClassification models | This is related to the issue #3646 I opened 2 days ago and which was considered interesting by @julien-c
I added the **support for token-level regression in Bert, Roberta, Albert, XLNet, XLM, DistilBert and in the template for adding a new model** when `self.num_labels == 1`, fixing the docstring to match the new changes (and correcting the one for XLNetForTokenClassification, which was copied from the XLNetForMultipleChoice one).
Given two different approaches to compute `active_labels`, I went with the one used in more recent models (e.g. Albert). The change was run against the tests and examples (without RUN_SLOW and RUN_CUSTOM_TOKENIZERS, though) and passed all of them.
I didn't feel like adding this to Electra since its TokenClassification implementation seems to be missing the `num_labels` variable for some reason. I didn't add new tests since the sentence regression case wasn't covered by the testing suite either.
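As a standalone illustration of the loss switch this introduces (toy shapes only, not the actual diff):
```python
import torch
from torch.nn import CrossEntropyLoss, MSELoss

def token_level_loss(logits, labels, num_labels):
    # num_labels == 1 -> token-level regression, otherwise token classification
    if num_labels == 1:
        return MSELoss()(logits.view(-1), labels.view(-1).float())
    return CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))

logits = torch.randn(2, 5, 1)   # (batch, seq_len, num_labels)
targets = torch.randn(2, 5)     # one continuous value per token
print(token_level_loss(logits, targets, num_labels=1))
```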
**Edit:** After the fix on `ElectraForTokenClassification`'s `num_labels` attribute, I added support for token-level regression there too. | 04-08-2020 19:00:34 | 04-08-2020 19:00:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=h1) Report
> Merging [#3703](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6435b9f908e7361330db89e263a65b0a58060d11&el=desc) will **decrease** coverage by `0.98%`.
> The diff coverage is `59.37%`.
```diff
@@ Coverage Diff @@
## master #3703 +/- ##
==========================================
- Coverage 78.13% 77.14% -0.99%
==========================================
Files 104 104
Lines 17723 17752 +29
==========================================
- Hits 13847 13695 -152
- Misses 3876 4057 +181
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `74.50% <0.00%> (-0.75%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.33% <50.00%> (-2.45%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.88% <66.66%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.04% <66.66%> (-10.67%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.78% <66.66%> (-0.38%)` | :arrow_down: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.02% <70.00%> (-0.57%)` | :arrow_down: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `72.38% <75.00%> (-0.27%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3703/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=footer). Last update [6435b9f...c6692e6](https://codecov.io/gh/huggingface/transformers/pull/3703?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM but I'll let others chime in.<|||||>For ELECTRA, the discrepancy should have been fixed by 500aa12318ce5acd289d5edb6cb8266b3c3b162e, so can you propagate your changes there too?<|||||>Related to naming, as it's for the `XXXForTokenClassification` when you have a single label wouldn't you expect to get a cross-entropy loss such as [binary cross-entropy](https://pytorch.org/docs/stable/nn.html#bceloss), rather than regression? Seeing as it's a classification model?<|||||>I'd say the same would apply to `XXXForSentenceClassification`, right? It would probably be best to decouple regression and classification for those two classes instead of having regression as a special case, and make the behavior for `num_labels == 1` the same as `num_labels == 2` for labels that are not one-hot encoded.
Probably worth a dedicated PR to decouple both instead of handling it here!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
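For context, a tiny sketch of the alternative being discussed here, where a single output per token is treated as binary classification with BCE rather than as regression (toy tensors, purely illustrative):
```python
import torch
from torch.nn import BCEWithLogitsLoss, MSELoss

logits = torch.randn(2, 5, 1)                         # one output per token
binary_targets = torch.randint(0, 2, (2, 5)).float()

bce = BCEWithLogitsLoss()(logits.view(-1), binary_targets.view(-1))  # classification view
mse = MSELoss()(logits.view(-1), binary_targets.view(-1))            # regression view
print(bce, mse)
```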
<|||||>Any chance of breathing new life into this? Token classification is particularly familiar due to NER, but in many research fields (e.g. psycholinguistic studies) we are interested in a lot more than that. Continuous values for tokens are very common there. I'd love to see regression and multilabel classification for token classification models.<|||||>Sad to see this die :( |
transformers | 3,702 | closed | Add `run_glue_tpu.py` that trains models on TPUs | 04-08-2020 17:17:32 | 04-08-2020 17:17:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=h1) Report
> Merging [#3702](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f68d22850ced09bb194b30068ff94ca3409f0879&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `41.66%`.
```diff
@@ Coverage Diff @@
## master #3702 +/- ##
==========================================
- Coverage 78.06% 78.03% -0.03%
==========================================
Files 100 100
Lines 17134 17144 +10
==========================================
+ Hits 13375 13378 +3
- Misses 3759 3766 +7
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.30% <36.36%> (-0.81%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3702/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=footer). Last update [f68d228...6e959fd](https://codecov.io/gh/huggingface/transformers/pull/3702?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hmm don't worry about me, this shouldn't be a problem – feel free to merge if ready @LysandreJik !<|||||>@jysohn23 I'm trying to run a variant of `run_glue_tpu.py` on TPUs and am stuck at an oom error. The first iteration of the below [for loop](https://github.com/jysohn23/transformers/blob/tpu/examples/run_tpu_glue.py#L150) runs fine, but it breaks on the second one. Any pointers on how to fix this?
```
train_dataloader = pl.ParallelLoader(dataloader, [args.device]).per_device_loader(args.device)
epoch_iterator = tqdm(train_dataloader, desc="Iteration", total=len(dataloader), disable=disable_logging)
for step, batch in enumerate(epoch_iterator):
```
I tried reducing the batch size to 1 and running on a single core; both led to the same error. I'm using the `gcr.io/tpu-pytorch/xla:nightly_3.6` image for my experiments.
Full log: shorturl.at/iswxR
A few lines of the error log:
```
020-06-30 21:49:29.304998: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] >>> Dumping Computation 0 | 1/6136 [01:16<131:08:36, 76.95s/it]
2020-06-30 21:49:29.305126: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] HloModule SyncTensorsGraph.33776, input_output_alias={ {0}: (250, {}), {1}: (249, {}), {2}: (265, {}), {3}: (248, {}), {4}: (247, {}), {5}: (246, {}), {6}: (245, {}), {7}: (244, {}), {8}: (269, {}), {9}: (243, {}), {10}: (242, {}), {11}: (241, {}), {12}: (240, {}), {13}: (239, {}), {14}: (271, {}), {15}: (238, {}), {16}: (237, {}), {17}: (236, {}), {18}: (235, {}), {19}: (234, {}), {20}: (273, {}), {21}: (233, {}), {22}: (232, {}), {23}: (231, {}), {24}: (230, {}), {25}: (229, {}), {26}: (274, {}), {27}: (228, {}), {28}: (227, {}), {29}: (226, {}), {30}: (225, {}), {31}: (224, {}), {32}: (276, {}), {33}: (223, {}), {34}: (222, {}), {35}: (221, {}), {36}: (220, {}), {37}: (219, {}), {38}: (277, {}), {39}: (218, {}), {40}: (217, {}), {41}: (216, {}), {42}: (215, {}), {43}: (214, {}), {44}: (279, {}), {45}: (213, {}), {46}: (212, {}), {47}: (211, {}), {48}: (210, {}), {49}: (209, {}), {50}: (280, {}), {51}: (208, {}), {52}: (207, {}), {53}: (206, {}), {54}: (205, {}), {55}: (204, {}), {56}: (282, {}), {57}: (203, {}), {58}: (202, {}), {59}: (201, {}), {60}: (200, {}), {61}: (199, {}), {62}: (283, {}), {63}: (198, {}), {64}: (197, {}), {65}: (196, {}), {66}: (195, {}), {67}: (194, {}), {68}: (285, {}), {69}: (193, {}), {70}: (192, {}), {71}: (191, {}), {72}: (190, {}), {73}: (189, {}), {74}: (286, {}), {75}: (188, {}), {76}: (187, {}), {77}: (186, {}), {78}: (185, {}), {79}: (184, {}), {80}: (288, {}), {81}: (183, {}), {82}: (182, {}), {83}: (181, {}), {84}: (180, {}), {85}: (179, {}), {86}: (289, {}), {87}: (178, {}), {88}: (177, {}), {89}: (176, {}), {90}: (175, {}), {91}: (174, {}), {92}: (291, {}), {93}: (173, {}), {94}: (172, {}), {95}: (171, {}), {96}: (170, {}), {97}: (169, {}), {98}: (292, {}), {99}: (168, {}), {100}: (167, {}), {101}: (166, {}), {102}: (165, {}), {103}: (164, {}), {104}: (294, {}), {105}: (163, {}), {106}: (162, {}), {107}: (161, {}), {108}: (160, {}), {109}: (159, {}), {110}: (295, {}), {111}: (158, {}), {112}: (157, {}), {113}: (156, {}), {114}: (155, {}), {115}: (154, {}), {116}: (297, {}), {117}: (153, {}), {118}: (152, {}), {119}: (151, {}), {120}: (150, {}), {121}: (149, {}), {122}: (298, {}), {123}: (148, {}), {124}: (147, {}), {125}: (146, {}), {126}: (145, {}), {127}: (144, {}), {128}: (300, {}), {129}: (143, {}), {130}: (142, {}), {131}: (141, {}), {132}: (140, {}), {133}: (139, {}), {134}: (301, {}), {135}: (138, {}), {136}: (137, {}), {137}: (136, {}), {138}: (135, {}), {139}: (134, {}), {140}: (303, {}), {141}: (133, {}), {142}: (132, {}), {143}: (131, {}), {144}: (130, {}), {145}: (129, {}), {146}: (304, {}), {147}: (128, {}), {148}: (127, {}), {149}: (126, {}), {150}: (125, {}), {151}: (124, {}), {152}: (306, {}), {153}: (123, {}), {154}: (122, {}), {155}: (121, {}), {156}: (120, {}), {157}: (119, {}), {158}: (307, {}), {159}: (118, {}), {160}: (117, {}), {161}: (116, {}), {162}: (115, {}), {163}: (114, {}), {164}: (309, {}), {165}: (113, {}), {166}: (112, {}), {167}: (111, {}), {168}: (110, {}), {169}: (109, {}), {170}: (310, {}), {171}: (108, {}), {172}: (107, {}), {173}: (106, {}), {174}: (105, {}), {175}: (104, {}), {176}: (312, {}), {177}: (103, {}), {178}: (102, {}), {179}: (101, {}), {180}: (100, {}), {181}: (99, {}), {182}: (313, {}), {183}: (98, {}), {184}: (97, {}), {185}: (96, {}), {186}: (95, {}), {187}: (94, {}), {188}: (315, {}), {189}: (93, {}), {190}: (92, {}), {191}: (91, {}), {192}: (90, {}), {193}: (89, {}), {194}: (316, {}), {195}: (88, {}), {196}: 
(87, {}), {197}: (86, {}), {198}: (85, {}), {199}: (84, {}), {200}: (318, {}), {201}: (83, {}), {202}: (82, {}), {203}: (81, {}), {204}: (80, {}), {205}: (79, {}), {206}: (319, {}), {207}: (78, {}), {208}: (77, {}), {209}: (76, {}), {210}: (75, {}), {211}: (74, {}), {212}: (321, {}), {213}: (73, {}), {214}: (72, {}), {215}: (71, {}), {216}: (70, {}), {217}: (69, {}), {218}: (322, {}), {219}: (68, {}), {220}: (67, {}), {221}: (66, {}), {222}: (65, {}), {223}: (64, {}), {224}: (324, {}), {225}: (63, {}), {226}: (62, {}), {227}: (61, {}), {228}: (60, {}), {229}: (59, {}), {230}: (325, {}), {231}: (58, {}), {232}: (57, {}), {233}: (56, {}), {234}: (55, {}), {235}: (54, {}), {236}: (327, {}), {237}: (53, {}), {238}: (52, {}), {239}: (51, {}), {240}: (50, {}), {241}: (49, {}), {242}: (328, {}), {243}: (48, {}), {244}: (47, {}), {245}: (46, {}), {246}: (45, {}), {247}: (44, {}), {248}: (330, {}), {249}: (43, {}), {250}: (42, {}), {251}: (41, {}), {252}: (40, {}), {253}: (39, {}), {254}: (331, {}), {255}: (38, {}), {256}: (37, {}), {257}: (36, {}), {258}: (35, {}), {259}: (34, {}), {260}: (333, {}), {261}: (33, {}), {262}: (32, {}), {263}: (31, {}), {264}: (30, {}), {265}: (29, {}), {266}: (334, {}), {267}: (28, {}), {268}: (27, {}), {269}: (26, {}), {270}: (25, {}), {271}: (24, {}), {272}: (336, {}), {273}: (23, {}), {274}: (22, {}), {275}: (21, {}), {276}: (20, {}), {277}: (19, {}), {278}: (337, {}), {279}: (18, {}), {280}: (17, {}), {281}: (16, {}), {282}: (15, {}), {283}: (14, {}), {284}: (339, {}), {285}: (13, {}), {286}: (12, {}), {287}: (8, {}), {288}: (7, {}), {289}: (5, {}), {290}: (340, {}), {291}: (346, {}), {292}: (4, {}), {377}: (342, {}) }
2020-06-30 21:49:29.305162: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76]
2020-06-30 21:49:29.305173: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %MaxComputation.2092 (x.2093: f32[], y.2094: f32[]) -> f32[] {
2020-06-30 21:49:29.305181: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %x.2093 = f32[] parameter(0)
2020-06-30 21:49:29.305196: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %y.2094 = f32[] parameter(1)
2020-06-30 21:49:29.305204: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] ROOT %maximum.2095 = f32[] maximum(f32[] %x.2093, f32[] %y.2094)
2020-06-30 21:49:29.305212: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] }
2020-06-30 21:49:29.305221: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76]
2020-06-30 21:49:29.305235: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %AddComputation.2101 (x.2102: f32[], y.2103: f32[]) -> f32[] {
2020-06-30 21:49:29.305244: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %x.2102 = f32[] parameter(0)
2020-06-30 21:49:29.305254: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %y.2103 = f32[] parameter(1)
2020-06-30 21:49:29.305264: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] ROOT %add.2104 = f32[] add(f32[] %x.2102, f32[] %y.2103)
2020-06-30 21:49:29.305273: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] }
2020-06-30 21:49:29.305283: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76]
.
.
.
.
2020-06-30 21:49:29.568300: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %subtract.5549 = f32[] subtract(f32[] %constant.5532, f32[] %constant.5533)
2020-06-30 21:49:29.568320: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %constant.20745 = f32[] constant(0.125)
2020-06-30 21:49:29.568321: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.5550 = f32[1,16,128,128]{3,2,1,0} broadcast(f32[] %subtract.5549), dimensions={}
2020-06-30 21:49:29.568331: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.20746 = f32[1024,4096]{1,0} broadcast(f32[] %constant.20745), dimensions={}
2020-06-30 21:49:29.568332: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %multiply.5551 = f32[1,16,128,128]{3,2,1,0} multiply(f32[1,16,128,128]{3,2,1,0} %multiply.5548, f32[1,16,128,128]{3,2,1,0} %broadcast.5550)
2020-06-30 21:49:29.568342: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %multiply.20747 = f32[1024,4096]{1,0} multiply(f32[1024,4096]{1,0} %get-tuple-element.20744, f32[1024,4096]{1,0} %broadcast.20746)
2020-06-30 21:49:29.568344: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %broadcast.5552 = f32[1,16,128,128]{3,2,1,0} broadcast(f32[] %constant.5533), dimensions={}
2020-06-30 21:49:29.568353: E 6014 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %reshape.25706 = f32[1,1]{1,0} reshape(f32[] %p263.1975)
2020-06-30 21:49:29.568354: E 5603 tensorflow/compiler/xla/xla_client/xla_util.cc:76] %add.5553 = f32[1,16,128,128]{3,2,1,0} add(f32[1,16,128,128]{3,2,1,0} %multiply.5551, f32[1,16,128,128]{3,2,1,0} %broadcast.5552)
.
.
.
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[XRTCompile_G6]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 235, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 229, in _start_fn
fn(gindex, *args)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 797, in _mp_fn
main(args)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 607, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, disable_logging=disable_logging)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 186, in train
for step, batch in enumerate(epoch_iterator):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py", line 1107, in __iter__
for obj in iterable:
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 31, in __next__
return self.next()
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 37, in next
xm.mark_step()
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 549, in mark_step
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))
RuntimeError: Resource exhausted: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
(0) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[XRTCompile_G6]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 235, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 229, in _start_fn
fn(gindex, *args)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 797, in _mp_fn
main(args)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 607, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer, disable_logging=disable_logging)
File "/export/share/akhilesh-gotmare/tpu_gedi/transformers/examples/run_tpu_glue.py", line 186, in train
for step, batch in enumerate(epoch_iterator):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/tqdm/std.py", line 1107, in __iter__
for obj in iterable:
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 31, in __next__
return self.next()
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 37, in next
xm.mark_step()
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 549, in mark_step
wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False))
RuntimeError: Resource exhausted: From /job:tpu_worker/replica:0/task:0:
2 root error(s) found.
(0) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: Ran out of memory in memory space vmem. It should not be possible to run out of vmem - please file a bug against XLA.
Largest program allocations in vmem:
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
XLA label: %fusion.4431 = (f32[1024]{0:T(1024)}, f32[24,128]{1,0:T(8,128)}, f32[24,128]{1,0:T(8,128)}, f32[1024]{0:T(1024)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24,128,1024]{2,1,0:T(8,128)}, f32[24...
Allocation type: scoped
[[{{node XRTCompile}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[XRTCompile_G6]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
/root/anaconda3/envs/pytorch/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
len(cache))
Traceback (most recent call last):
File "run_tpu_glue.py", line 806, in <module>
main_cli()
File "run_tpu_glue.py", line 802, in main_cli
xmp.spawn(_mp_fn, args=(args,), nprocs=args.num_cores)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 300, in spawn
start_method=start_method)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 113, in join
(error_index, exitcode)
Exception: process 6 terminated with exit code 17
```
|
|
transformers | 3,701 | closed | Problem with https://transformer.huggingface.co/doc/gpt2-xl | Hi, when using it, simply switch the button up to the xl one. When you try to generate a prediction of words, it will simply never load, regardless of what all the other settings are. Every other gpt2 model size works, except the xl one. I tried it on Android and Linux, same problem, and in different browsers, Firefox and Google Chrome, same problem. It has nothing to do with add-ons, because the same problem occurs with them all off. | 04-08-2020 15:55:55 | 04-08-2020 15:55:55 | Duplicate of https://github.com/huggingface/transformers/issues/3452.
We should just remove the option cc @julien-c <|||||>You’re right, I’ll get on it<|||||>Also best username ever, @MarxEngelsLeninStalin <|||||>Oh, I like that model. As someone who has problems with spelling/grammar and fatigue, it's useful to use the xl one, since it has more knowledge in it (yes, I know it's not accurate, but it is more accurate than the large model). Is there any prospect of it being turned on again in the future?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@Marxist-Leninist, unfortunately adding it back isn't on our near future roadmap. |
transformers | 3,700 | closed | Would the weights for the main body of the pretrained GPT2Model and pretrained GPT2DoubleHeadsModel be identical? | Hello,
From my understanding, the difference between the GPT2Model and GPT2DoubleHeadsModel is that GPT2Model does not include any output head, whereas GPT2DoubleHeadsModel includes two types of output heads (an LM head and a multiple-choice head).
I am wondering: would the weights used in the pretrained GPT2Model be identical to the weights used in the main body (every part of the model except the output heads) of the pretrained GPT2DoubleHeadsModel? Or would the two sets of weights be different, since GPT2DoubleHeadsModel was trained after including the output heads, whereas GPT2Model was trained without any output head?
Thank you (I hope my question is understandable), | 04-08-2020 14:22:31 | 04-08-2020 14:22:31 | Hi, unless you freeze the transformer's (the transformer being the base model) weights during training, they will be modified during fine-tuning. I believe none of our scripts in the library freeze the transformer's weights, so the weights would not be identical.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
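For illustration, a small sketch of how one could check this and how to freeze the base model so the weights stay identical (it uses the public `gpt2` checkpoint and is only a toy check):
```python
import torch
from transformers import GPT2Model, GPT2DoubleHeadsModel

base = GPT2Model.from_pretrained("gpt2")
double = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# the double-heads model keeps the base model under `.transformer`
same = all(
    torch.equal(p1, p2)
    for p1, p2 in zip(base.parameters(), double.transformer.parameters())
)
print(same)  # True here, since both bodies come from the same pretrained checkpoint

# freezing the base model means fine-tuning only updates the heads
for param in double.transformer.parameters():
    param.requires_grad = False
```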
|
transformers | 3,699 | closed | Bug in ElectraForTokenClassification | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): ElectraForTokenClassification
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [√] the official example scripts: (give details below)
huggingface.co/transformers/model_doc/electra.html#electrafortokenclassification
* [×] my own modified scripts: (give details below)
The task I am working on is:
* [ ×] an official GLUE/SQUaD task: (give the name)
* [ √] my own task or dataset: (give details below)
Ran the example given in the documentation
from transformers import ElectraTokenizer, ElectraForTokenClassification
import torch
tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = ElectraForTokenClassification.from_pretrained('google/electra-small-discriminator')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, labels=labels)
loss, scores = outputs[:2]
## To reproduce
Steps to reproduce the behavior:
1. Start Google Colab
2. Install transformers and requirements
3. Run the code
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
AttributeError: 'ElectraForTokenClassification' object has no attribute 'num_labels'
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Windows 10
- Python version: 3.7.6
- PyTorch version (GPU?): Colab
- Tensorflow version (GPU?): Colab
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-08-2020 13:58:20 | 04-08-2020 13:58:20 | Oh no, it should be `self.config.num_labels` instead of `self.num_labels` [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_electra.py#L665)!<|||||>My bad, it should be fixed now! |
transformers | 3,698 | closed | More doc for model cards | see https://github.com/huggingface/transformers/pull/3679#pullrequestreview-389368270 | 04-08-2020 13:57:34 | 04-08-2020 13:57:34 | |
transformers | 3,697 | closed | Fix force_download of files on Windows | On Windows, `os.rename` gives the error
```
FileExistsError: [WinError 183] Cannot create a file when that file already exists:
```
when trying to re-download a model that already exists in cache using `force_download=True`. | 04-08-2020 11:51:31 | 04-08-2020 11:51:31 | Can't test this right now as I don't have access to a Windows machine - does this still work if the model doesn't already exist?<|||||>>
>
> Can't test this right now as I don't have access to a Windows machine - does this still work if the model doesn't already exist?
Yes, I tested that 😄. There shouldn't be any further differences between `rename` and `replace`. Here's [what the documentation says](https://docs.python.org/3/library/os.html#os.replace). |
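For illustration, a minimal sketch of the difference (the temporary files below are placeholders): `os.replace` overwrites an existing destination on every platform, whereas `os.rename` raises `FileExistsError` on Windows when the target already exists.
```python
import os
import pathlib
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
src, dst = tmp / "new_download.tmp", tmp / "cached_model.bin"
src.write_text("new")
dst.write_text("old")

os.replace(src, dst)    # atomic overwrite, works on Windows as well
print(dst.read_text())  # -> "new"
```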
transformers | 3,696 | closed | Can not set different token and model dir in `run_glue.py` | # 🐛 Bug
## Information
The original code is as follows; when I use an `args.tokenizer_name` different from `args.model_name_or_path`, it still calls for the `model_name_or_path` config file.
tokenizer = AutoTokenizer.from_pretrained(
    args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
    do_lower_case=args.do_lower_case,
    cache_dir=args.cache_dir if args.cache_dir else None,
)
It would be solved by adding a config parameter as follows:
tokenizer = AutoTokenizer.from_pretrained(
    args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
    do_lower_case=args.do_lower_case,
    config=config,
    cache_dir=args.cache_dir if args.cache_dir else None,
)
| 04-08-2020 11:04:47 | 04-08-2020 11:04:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,695 | closed | Deserialize BERT Sequence Classifier Quantized Model & Inferencing Issue | Dynamic Quantization: Loading Quantized Model issue here:
I am currently looking at the Colab link below for quantization, where all Linear layers are quantized in the corresponding BertForSequenceClassification model.
Step 1: Serialize the quantized model.
Step 2: Deserialize it using the code shown below (see screenshots).
Screenshot #1:
The Linear layers of the encoder are converted to DynamicQuantizedLinear layers right after conversion.
Once the serialized quantized model is loaded back for future use, it no longer shows DynamicQuantizedLinear layers, but rather plain Linear layers for the query, key, and value projections, as shown in the screenshots.
Also, inference with these deserialized models gives wrong predictions, so could someone guide me on how to deserialize quantized models? It is taking a lot of time to figure this out.
https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb?authuser=1#scrollTo=dUJ1NGinLAa1
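For reference, a rough sketch of one way this save/load cycle can work; the model name and file path are placeholders, and the key point is re-applying `quantize_dynamic` to a fresh float model before loading the saved state dict:
```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(quantized.state_dict(), "quantized_bert.pt")

# deserialization: re-create the float model, re-apply dynamic quantization,
# and only then load the quantized state dict
reloaded = BertForSequenceClassification.from_pretrained("bert-base-uncased")
reloaded = torch.quantization.quantize_dynamic(reloaded, {torch.nn.Linear}, dtype=torch.qint8)
reloaded.load_state_dict(torch.load("quantized_bert.pt"))
print(reloaded.bert.encoder.layer[0].attention.self.query)  # DynamicQuantizedLinear
```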
## Information
<img width="922" alt="Screenshot 2020-04-08 at 3 50 23 PM" src="https://user-images.githubusercontent.com/6042186/78773790-5c201480-79b1-11ea-930a-aff268c97394.png">
<img width="681" alt="Screenshot 2020-04-08 at 3 49 08 PM" src="https://user-images.githubusercontent.com/6042186/78773823-6a6e3080-79b1-11ea-88ef-406889880e83.png">
| 04-08-2020 10:30:15 | 04-08-2020 10:30:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,694 | closed | Extending XLM Roberta for Question Answering | # ❓ Questions & Help
The AutoModelForQuestionAnswering is supported by many models, but not yet by XLM Roberta. In the current implementation I could see that most task-specific classes for XLM-R, e.g. XLMRobertaForSequenceClassification are just inheriting from Roberta. However, when I try to extend the class analogously, the process fails. This is my extension:
from transformers.modeling_roberta import (
    RobertaForQuestionAnswering,
)
and
@add_start_docstrings(
    XLM_ROBERTA_START_DOCSTRING,
)
class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering):
    config_class = XLMRobertaConfig
    pretrained_model_archive_map = XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
The error message I get is
> Traceback (most recent call last):
File "..miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 289, in _check_seekable
f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'
> During handling of the above exception, another exception occurred:
> Traceback (most recent call last):
File "../miniconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 516, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location="cpu")
File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 525, in load
with _open_file_like(f, 'rb') as opened_file:
File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 217, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 202, in __init__
_check_seekable(buffer)
File "../miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 292, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/home/anlausch/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 285, in raise_err_msg
raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
> File "../modeling_auto.py", line 968, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "..miniconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 519, in from_pretrained
"Unable to load weights from pytorch checkpoint file. "
Any idea what's going on here? | 04-08-2020 09:41:27 | 04-08-2020 09:41:27 | I've worked around this problem now by not extending the huggingface framework directly but implementing XLMRobertaForQuestionAnswering externally (using huggingface classes etc.). <|||||>Would make sense to have it as a new feature though but that's a different issue type. ;)<|||||>Could you share the code or make PR? <|||||>@djstrong The code corresponds to my solution presented above. The issue was related to me changing the library instead of just adding the file externally. |
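A rough sketch of that external workaround, reusing the same class attributes as the snippet above (this is an assumption about how it was wired up, not an official library class):
```python
from transformers import XLMRobertaConfig
from transformers.modeling_roberta import RobertaForQuestionAnswering
from transformers.modeling_xlm_roberta import XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP

class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering):
    # defined outside the library, so no library files need to be modified
    config_class = XLMRobertaConfig
    pretrained_model_archive_map = XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP

model = XLMRobertaForQuestionAnswering.from_pretrained("xlm-roberta-base")
```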
transformers | 3,693 | closed | How can I point the logs directory to Google Drive while fine-tuning a GPT-2 model, which helps in visualizing data via tensorboard? |
Hi,
I am facing an issue retrieving logs while fine-tuning a GPT-2 model using [Google Colab](https://colab.research.google.com/github/interactive-fiction-class/interactive-fiction-class.github.io/blob/master/homeworks/language-model/hw4_transformer.ipynb).
As the fine-tuning takes several hours, Google Colab halts the running process at a certain point in time, even when there are remaining epochs left.
In that case, I am able to successfully continue my fine-tuning by including the "should_continue" parameter while running the script [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py).
However, my logs vanished, and I could not retrieve the TensorBoard data, which had been run with these commands:
```
%load_ext tensorboard
%tensorboard --logdir=runs
```
**Is there a way to point my logs to Google Drive, so that I can retrieve and visualize them with TensorBoard at any point in time?**
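For what it's worth, a rough sketch of one way to do this in Colab; the Drive folder name is a placeholder and it assumes the script writes its TensorBoard events to `./runs`:
```python
from google.colab import drive
import os

drive.mount('/content/drive')
log_dir = '/content/drive/My Drive/gpt2_runs'
os.makedirs(log_dir, exist_ok=True)
# make the local "runs" folder a symlink into Drive so event files survive a disconnect
if not os.path.exists('runs'):
    os.symlink(log_dir, 'runs')
```
`%tensorboard --logdir=runs` can then be re-run at any time, even after a Colab restart, since the event files live on Drive.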
| 04-08-2020 09:12:14 | 04-08-2020 09:12:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|