repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 3,592 | closed | Issues with using SciBERT for Summarizer | Hi All, I am not sure if I am doing this right, but I want my summarizer to use SciBERT SciVocab instead of the traditional BERT vocab. I appreciate any help and advice! This is what I am currently using:
```python
from transformers import BertTokenizer, BertModel, pipeline

model_version = 'scibert_scivocab_uncased'
do_lower_case = True
model = BertModel.from_pretrained(model_version)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
summarizer = pipeline(task="summarization", model=model, tokenizer=tokenizer)
summary = summarizer(readin_df['Text'][0])
```
I am facing this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-39-83180e8b1c13> in <module>
1 summarizer = pipeline(task="summarization", model = model, tokenizer = tokenizer)
2
----> 3 summary = summarizer(readin_df['Text'][0])
/opt/conda/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs)
1251
1252 summaries = self.model.generate(
-> 1253 inputs["input_ids"], attention_mask=inputs["attention_mask"], **generate_kwargs,
1254 )
1255
/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs)
47 def decorate_no_grad(*args, **kwargs):
48 with self:
---> 49 return func(*args, **kwargs)
50 return decorate_no_grad
51
/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id)
788 if self.get_output_embeddings() is None:
789 raise AttributeError(
--> 790 "You tried to generate sequences with a model that does not have a LM Head."
791 "Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
792 )
AttributeError: You tried to generate sequences with a model that does not have a LM Head.Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` ) | 04-02-2020 15:22:02 | 04-02-2020 15:22:02 | I believe only BART and T5 can do summarization for now. See the [documentation regarding the checkpoints that may summarize](https://huggingface.co/transformers/main_classes/pipelines.html#summarizationpipeline).<|||||>Here is [a notebook](https://github.com/Nikoschenk/bert-extractive-summarizer/blob/master/colab/scibert-summaries.ipynb) using the scibert model based on the [great code](https://github.com/dmmiller612/bert-extractive-summarizer) that Derek provided.
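For illustration, a minimal sketch of the suggestion above (the sample text is a placeholder, and the default checkpoint may vary between library versions): the summarization pipeline needs a seq2seq model with a language-model head, such as the default BART CNN/DailyMail checkpoint, rather than a plain `BertModel` encoder.
```python
# Sketch only: use a summarization-capable checkpoint instead of a plain BertModel.
from transformers import pipeline

summarizer = pipeline("summarization")  # loads a BART checkpoint that has an LM head
text = "SciBERT is a pretrained language model for scientific text ..."  # placeholder input
summary = summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
print(summary)
```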
|
transformers | 3,591 | closed | Cannot load model in transformers | Hello, I tried to load your model and I get this error:
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = AutoModelWithLMHead.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
ERROR:pytorch_transformers.modeling_utils:Model name 'nlpaueb/bert-base-greek-uncased-v1'
was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased,
bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese,
bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking,
bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc).
We assumed 'nlpaueb/bert-base-greek-uncased-v1'
was a path or url but couldn't find any file associated to this path or url.
If I download the model and load it from a folder on the system, I get this error:
tokenizer = AutoTokenizer.from_pretrained("/home/transformers/huggingface/greekaueb")
model = AutoModelWithLMHead.from_pretrained("/home/transformers/huggingface/greekaueb")
ValueError: Unrecognized model identifier in /home/hatzimin/transformers
/huggingface/greek_transfer_learning/greekaueb.
Should contains one of 'bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm', 'roberta'
| 04-02-2020 14:51:36 | 04-02-2020 14:51:36 | It seems you have a very old version of this repository (when it was still named `pytorch_transformers`). Please update to the latest version or you won't have all the features (like access to the model hub).<|||||>I fixed the version and the error was fixed. But now I get
AttributeError: 'BertTokenizer' object has no attribute 'encoder'
Do I have another problem with version? I also used
pip3 install --upgrade pytorch-pretrained-bert
Thank you<|||||>Please post the code and full trace that gave you this error.<|||||> tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = AutoModelWithLMHead.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model.to(args.device)
# Add special tokens if they are not already added
add_special_tokens_(model, tokenizer)
def add_special_tokens_(model, tokenizer):
""" Add special tokens to the tokenizer and the model if they have not already been added. """
orig_num_tokens = len(tokenizer.encoder)
num_added_tokens = tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN) # doesn't add if they are already there
if num_added_tokens > 0:
model.resize_token_embeddings(new_num_tokens=orig_num_tokens + num_added_tokens)
INFO:transformers.modeling_utils:loading weights file https://s3.amazonaws.com /models.huggingface.co/bert/nlpaueb/bert-base-greek-uncased-v1/pytorch_model.bin from cache at /home/hatzimin/.cache/torch/transformers/3a685f5fa6f50a35a4efc31e9cdc74cfe8e2956002ee5c2df350e5e6c54deaf2.2aad66b9b70b2aa069cb5a695a371c8289c0fc672a34efff6188126824ef3b60
INFO:transformers.modeling_utils:Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
Traceback (most recent call last):
File "./traingreek.py", line 268, in <module>
train()
File "./traingreek.py", line 161, in train
add_special_tokens_(model, tokenizer)
File "./traingreek.py", line 51, in add_special_tokens_
orig_num_tokens = len(tokenizer.encoder)
AttributeError: 'BertTokenizer' object has no attribute 'encoder'
<|||||>That's because the tokenizer does not have an `encoder` attribute. If you're looking to get the size of the tokenizer, you can do it with `len(tokenizer)`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
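For reference, a small sketch of that fix applied to the helper above (`ATTR_TO_SPECIAL_TOKEN` is the user's own dict of special tokens from the snippet earlier in this thread):
```python
# Sketch of the suggested fix: use len(tokenizer) instead of tokenizer.encoder.
def add_special_tokens_(model, tokenizer):
    """Add special tokens to the tokenizer and resize the model embeddings if needed."""
    orig_num_tokens = len(tokenizer)  # works for BertTokenizer, which has no .encoder attribute
    num_added_tokens = tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)  # user-defined dict
    if num_added_tokens > 0:
        model.resize_token_embeddings(new_num_tokens=orig_num_tokens + num_added_tokens)
```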
|
transformers | 3,590 | closed | min_length parameter in default pipeline summarization produces output smaller than min_length | Hi, I am using pipeline summarization. My code is below:
```
from transformers import pipeline, AutoTokenizer, AutoModel
summarizer = pipeline("summarization")
abstract_dictionary = {'Introduction':'','Methods':'','Results':'','Discussion':''}
# article_dictionary (defined elsewhere) is assumed to map section names to their text
for section in article_dictionary:
    if section == 'Introduction':
        min_length = 100
    elif section == 'Methods':
        min_length = 200
    elif section == 'Results':
        min_length = 250
    elif section == 'Discussion':
        min_length = 100
    summary = summarizer(article_dictionary[section], min_length=min_length)[0]['summary_text']
    abstract_dictionary[section] = abstract_dictionary[section]+' '+summary

for section in abstract_dictionary:
    print(section)
    print(abstract_dictionary[section])
    print(" ")
```
and I get the following summary. You will notice that each section is smaller than the min length specified.
Introduction
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease. We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axit inib would result in better outcomes than sunit in patients with previously untreated advanced renal- cell carcinoma.
Methods
Pembrolizumab (Keytruda, Merck Sharp & Dohme) plus axitinib (Inlyta, Pfizer) or sunitinIB (Sutent, Pfizers) was used in an open-label, phase 3 trial. Eligible patients were 18 years of age or older; had newly diagnosed or recurrent stage IV clear-cell renal-cell carcinoma; had received no previous systemic therapy for advanced disease; and had a Karnofsky performance-status score of 70 or more. Patients were excluded if they had symptomatic central nervous system metastases, active autoimmune disease, or poorly controlled hypertension. Data on adverse events were collected regularly,
Results
A total of 1062 patients at 129 sites in 16 countries were screened for eligibility. Of these, 861 patients at 124 sites underwent randomization from October 24, 2016, to January 24, 2018. A total of 432 patients were assigned to the pembrolizumab–axitinib group, and 429 patients to the sunit inib group. The median duration of any treatment was 10.4 months in both groups. The estimated percentage of patients who were alive at 12 months was 89.9% (95% CI, 86.4 to 92.4) in the pembrozumab group and 78.3% (75.8 to 82.,
Discussion
Treatment with pembrolizumab plus axitinib resulted in a 47% lower risk of death. The objective response rate was 23.6 percentage points higher in the pembrozumab–axit inib group than in the sunitin ib group. The benefit of pembrology plus ax itinib was observed across all subgroups tested. No deaths related to hepatic adverse events were reported in this trial. However, the overall frequency of toxic effects was similar in the two groups. | 04-02-2020 14:49:04 | 04-02-2020 14:49:04 | Hi @Weilin37,
How did you count the length of your output? The `min_length` corresponds to the minimum number of tokens in the output, and the number of words is `<=` the number of tokens.
To count the number of tokens you have in your output text, you could use `tokenizer.tokenize(OUTPUT_TEXT)`<|||||>> Hi @Weilin37,
>
> How did you count the length of your output? The `min_length` corresponds to the minimum number of tokens in the output, and the number of words is `<=` the number of tokens.
> To count the number of tokens you have in your output text, you could use `tokenizer.tokenize(OUTPUT_TEXT)`
Hi. Ah ok thanks for clarifying. I had mistakenly thought it was the # of words.<|||||>Hi @patrickvonplaten,
Is it possible to set the minimum number of words instead of tokens? |
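For reference, a small sketch (the checkpoint name is illustrative) of how the token count of a generated summary can be compared with its word count, since `min_length`/`max_length` are measured in tokens:
```python
# Sketch: min_length/max_length count tokens, which is usually >= the word count.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bart-large-cnn")  # illustrative checkpoint name
summary_text = "Renal-cell carcinoma is characterized by ..."  # a generated summary
num_tokens = len(tokenizer.tokenize(summary_text))
num_words = len(summary_text.split())
print(num_tokens, num_words)  # num_tokens is typically the larger of the two
```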
transformers | 3,589 | closed | Evaluation - Output False Positive and False Negative Sentences | # ❓ Questions & Help
Hi, regarding the sequential classification task, after the evaluation on test data, how could I output the actual false positive and false negative sentences? Basically convert BERT embedding back to the actual sentences for error analysis?
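One possible approach, as a sketch: rather than trying to invert the embeddings, decode the `input_ids` of the misclassified examples back into text. This assumes a binary classification setup and that the tokenized inputs, gold labels, and predictions were collected during evaluation (the `all_*` names are placeholders).
```python
# Sketch: recover the text of misclassified test examples from their input_ids.
false_positives, false_negatives = [], []
for input_ids, label, pred in zip(all_input_ids, all_labels, all_preds):  # collected during evaluation
    if pred == label:
        continue
    sentence = tokenizer.decode(input_ids, skip_special_tokens=True)
    if pred == 1 and label == 0:
        false_positives.append(sentence)
    elif pred == 0 and label == 1:
        false_negatives.append(sentence)
```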
| 04-02-2020 14:28:03 | 04-02-2020 14:28:03 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,588 | closed | added model_cards for polish squad models | 04-02-2020 13:35:28 | 04-02-2020 13:35:28 | model pages: https://huggingface.co/models?filter=polish,question-answering
Thank you! |
|
transformers | 3,587 | closed | How to fine tune T5 like for translation tasks? | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
I want to pass 100000's of training instances to this syntax, but it says its limit is only 512. I am passing a list of strings.
input_ids = tokenizer.encode('translate English to German: The house is wonderful. </s>', return_tensors='pt')
lm_labels = tokenizer.encode('Das Haus ist wunderbar. </s>', return_tensors='pt')
model(input_ids=input_ids, lm_labels=lm_labels)
And with 512 instances it does not actually train; it finishes in seconds without loading weights or training.
Could you explain how to fine-tune it correctly and use the fine-tuned model for generation?
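For illustration, a minimal sketch of a single fine-tuning step on one translation pair. The argument name `lm_labels` matches the version discussed in this thread; newer releases renamed it to `labels`. The 512-token limit applies per example, so long datasets are fed in batches rather than as one giant string.
```python
# Sketch: one training step for T5 on a translation pair.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

source = tokenizer.encode("translate English to German: The house is wonderful. </s>", return_tensors="pt")
target = tokenizer.encode("Das Haus ist wunderbar. </s>", return_tensors="pt")

model.train()
outputs = model(input_ids=source, lm_labels=target)  # `labels` in newer versions
loss = outputs[0]
loss.backward()
optimizer.step()
optimizer.zero_grad()
```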
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-02-2020 09:53:09 | 04-02-2020 09:53:09 | see https://github.com/huggingface/transformers/issues/3576 |
transformers | 3,586 | closed | when I run transformers in Docker container, it appeared this error | ```
unknown exception: Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed 'bert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
OS platform python 3.7.6 transformer 2.4 debian | 04-02-2020 09:12:10 | 04-02-2020 09:12:10 | We would need more information in order to help you debug, namely transformers version, python version, code example, etc.
Have you seen the issue template? Respecting it will ensure you get help :slightly_smiling_face: |
transformers | 3,585 | closed | Reason behind the layers taken for distilbert-multilingual | Hi,
I went across the training code for distilbert. I can see that the distillation process is used to train the distilbert model from bert model.
What is the reason that only **layers [0, 2, 4, 7, 9, 11]** were taken to train the DistilBERT model? Is there any idea behind this choice of layers?
Also, what do the **last two layers of DistilBERT** correspond to? Are they equivalent to layers **9 and 11** of the original BERT model?
This was the training code I referred to:
https://github.com/huggingface/transformers/blob/master/examples/distillation/scripts/extract_distilbert.py | 04-02-2020 08:59:18 | 04-02-2020 08:59:18 | With distillation, the initialization is really important in order to obtain good results. You can read the [paper](https://arxiv.org/pdf/1910.01108.pdf) to have more information, look for section 3 and "Student Initialization"!
The important part is the following:
_Student initialization In addition to the previously described optimization and architectural choices,
an important element in our training procedure is to find the right initialization for the sub-network to
converge. Taking advantage of the common dimensionality between teacher and student networks,
we initialize the student from the teacher by taking one layer out of two._<|||||>Why not [0, 2, 4, 6, 8, 10]? |
transformers | 3,584 | closed | cased -> uncased in BERT GLUE example | similar to https://github.com/huggingface/transformers/issues/3183, the GLUE readme also have this issue, the MRPC example use `bert-base-cased` while at the same time `--do_lower_case`. | 04-02-2020 08:03:02 | 04-02-2020 08:03:02 | additionally, the `xlnet-large-cased` should not combine with `--do_lower_case` as `xlnet` model only has a cased version. |
transformers | 3,583 | closed | Dict in the first positional arguments | Could someone help me figure out what is wrong in the TFBertModel below?
```python
features = next(iter(dataset))
features
```
which prints:
```python
{'attention_mask': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(16,), dtype=int32, numpy=
array([ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035,
24471, 2140, 1006, 24471, 2140, 1007, 102], dtype=int32)>}
```
In turn, I loaded the `TFBertModel` and following the[ documentation page](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertForPreTraining) I tried to use `features` as input:
```python
model = TFBertModel.from_pretrained('bert-base-uncased')
model(features)
```
But I'm getting the following error:
```python
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-19-8bcadb504daf> in <module>()
----> 1 model(features)
8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_bert.py in call(self, inputs, **kwargs)
706 last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
707 """
--> 708 outputs = self.bert(inputs, **kwargs)
709 return outputs
710
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_bert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, training)
545 # this attention mask is more simple than the triangular masking of causal attention
546 # used in OpenAI GPT, we just need to prepare the broadcast dimension here.
--> 547 extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]
548
549 # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py in _slice_helper(tensor, slice_spec, var)
982 ellipsis_mask=ellipsis_mask,
983 var=var,
--> 984 name=name)
985
986
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py in strided_slice(input_, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, var, name)
1148 ellipsis_mask=ellipsis_mask,
1149 new_axis_mask=new_axis_mask,
-> 1150 shrink_axis_mask=shrink_axis_mask)
1151
1152 parent_name = name
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py in strided_slice(input, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, name)
10155 pass # Add nodes to the TensorFlow graph.
10156 except _core._NotOkStatusException as e:
> 10157 _ops.raise_from_not_ok_status(e, name)
10158 # Add nodes to the TensorFlow graph.
10159 if begin_mask is None:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6651 message = e.message + (" name: " + name if name is not None else "")
6652 # pylint: disable=protected-access
-> 6653 six.raise_from(core._status_to_exception(e.code, message), None)
6654 # pylint: enable=protected-access
6655
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Index out of range using input dim 1; input has only 1 dims [Op:StridedSlice] name: tf_bert_model/bert/strided_slice/
```
note: edited to correct typo. | 04-02-2020 05:25:47 | 04-02-2020 05:25:47 | I would assume that you need to unpack the dictionary as you pass it to the model:
```python
model(**features)
```
EDIT: I was wrong, since the TF version should be used differently than the PT version.<|||||>Thank you @BramVanroy, but another error arose:
```python
model(**features)
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-20-6248eef0b628> in <module>()
----> 1 model(**features)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
798 else:
799 raise ValueError(
--> 800 'The first argument to `Layer.call` must always be passed.')
801
802 call_context = base_layer_utils.call_context()
ValueError: The first argument to `Layer.call` must always be passed.
---------------------------------------------------------------------------
NOTE: Current TensorFlow version is 2.2.0-rc2. To use TF 1.x instead,
restart your runtime (Ctrl+M .) and run "%tensorflow_version 1.x" before
you run "import tensorflow".
---------------------------------------------------------------------------
```<|||||>I believe the error comes from the fact that you're lacking a dimension in your features. All inputs to the model should have a shape of `[batch_size, sequence_length]`, whereas from your output:
```py
{
'attention_mask': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035, 24471, 2140, 1006, 24471, 2140, 1007, 102], dtype=int32)>
}
```
Your tensors are of shape `[sequence_length]`. You can unsqueeze those to add a batch dimension (or simply batch them, since you're already making use of the attention mask), and it should work.<|||||>@LysandreJik,
Yes, it was really missing a dimension in `features`.
Using `features = tokenized_dataset.batch(2)` produces the desired input to transformer:
```python
({'attention_mask': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=
array([[ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035,
24471, 2140, 1006, 24471, 2140, 1007, 102],
[ 101, 13366, 8254, 2050, 1035, 20950, 1035, 2000, 1035,
24471, 2140, 1035, 2862, 1006, 20950, 102]], dtype=int32)>},
{'attention_mask': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=
array([[ 101, 27059, 2678, 8909, 2013, 24471, 2140, 1012, 102,
0, 0, 0, 0, 0, 0, 0],
[ 101, 2358, 2099, 1011, 1028, 2862, 10463, 20950, 2000,
24471, 2140, 2862, 1012, 2013, 12170, 102]], dtype=int32)>})
```
Thank you. |
transformers | 3,582 | closed | Does the BART model support Chinese? Having the pre-trained Chinese model? | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 04-02-2020 02:35:09 | 04-02-2020 02:35:09 | Have the same problem and do you have any url to download the bart-large-cnn or and other pretrained model<|||||>No chinese support, yet.
Download:
```bash
wget https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/pytorch_model.bin
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,581 | closed | Different outputs in using convert_roberta_original_pytorch_checkpoint_to_pytorch.py | # 🐛 Bug
We have an issue in converting roberta model from fairseq format to huggingface format. The conversion function provided in the transformers library gives you different outputs when you pass sample data through the fairseq and huggingface-transformers versions seperately.
⬇️Problem with our pretrained model
There's a difference between two output tensors. The actual number is⬇️
huggingface transformers output:
> tensor([[[ 2.1787e+01, -4.7770e+00, 6.1631e+00, ..., -4.6316e+00,
> -4.7297e+00, -3.9510e-01],
> [ 2.0051e+00, -2.7158e+00, 5.2598e+00, ..., -2.3681e+00,
> -2.0179e+00, -1.5263e-02],
> [-2.7891e+00, -4.7558e+00, 5.3717e+00, ..., -4.5290e+00,
> -3.8888e+00, -5.7892e-02],
> [ 1.3125e+00, -3.9378e+00, 6.7551e+00, ..., -3.6842e+00,
> -3.4968e+00, 5.4736e-01],
> [-3.4706e+00, -7.7992e+00, 1.6678e+01, ..., -6.1806e+00,
> -7.4419e+00, -8.5062e-02]]], grad_fn=<AddBackward0>)
fairseq output:
> tensor([[[21.2672, -4.8905, 6.2439, ..., -4.8653, -4.9650, -1.6207],
> [ 1.4856, -2.8294, 5.3406, ..., -2.6018, -2.2533, -1.2408],
> [-3.3087, -4.8693, 5.4525, ..., -4.7626, -4.1241, -1.2835],
> [ 0.7930, -4.0513, 6.8359, ..., -3.9179, -3.7322, -0.6782],
> [-3.9902, -7.9127, 16.7589, ..., -6.4142, -7.6773, -1.3106]]],
> grad_fn=<AddBackward0>)
abs difference:
> tensor([[[0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256]]],
> grad_fn=<AbsBackward>)
The same issue happens when we try to convert the default roberta-base model from fairseq format into transformers format.
We have changed some source code in fairseq to register our model name and architecture (just changes in some hyperparameters).
Our initial guess is that some parameters are not loaded, or are loaded incorrectly.
The error looks like this:
> max_absolute_diff = 1.2255859375
> Do both models output the same tensors? 💩
> Traceback (most recent call last):
> File "convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 181, in <module>
> args.roberta_checkpoint_path, args.pytorch_dump_folder_path, args.classification_head
> File "convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 160, in convert_roberta_checkpoint_to_pytorch
> raise Exception("Something went wRoNg")
> Exception: Something went wRoNg
Thanks a lot for helping.
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): English
## Expected behavior
Explanation about the function (or further contact in refining the function?)
## Environment info
- `transformers` version: the default version
- Platform: NYU Prince Cluster
- Python version: python 3.7
- PyTorch version (GPU?): No GPU
- Tensorflow version (GPU?): No GPU
- Using GPU in script?: No GPU
- Using distributed or parallel set-up in script?: No GPU
| 04-02-2020 02:22:15 | 04-02-2020 02:22:15 | I had a similar problem. There is a bug in conversion script, this pull request fixes the issue: https://github.com/huggingface/transformers/pull/3642<|||||>@sdadas Thanks a lot. I've pulled the latest change and it works now.<|||||>Great to hear, closing! |
transformers | 3,580 | closed | wrong parameters order in TFTransfoXLMainLayer _update_mems call | # 🐛 Bug
## Information
In the file src/transformers/modeling_tf_transfo_xl.py there is a small typo in the parameters of a method call.
Line 491 defines the method:
def _update_mems(self, hids, mems, qlen, mlen)
And line 610 calls it with:
new_mems = self._update_mems(hids, mems, mlen, qlen)
As you can see qlen and mlen placed in wrong order. In result memory size can raise outside of specified value. | 04-01-2020 22:35:46 | 04-01-2020 22:35:46 | Good catch @dmytyar :-) |
transformers | 3,579 | closed | Summarization pipeline max_length parameter seems to just cut the summary rather than generating a complete sentence within the max length | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): default model from pipeline("summarization")
Language I am using the model on (English, Chinese ...): English
I am using the pipeline for summarization in the most up-to-date version of Transformers. I am inputting a long piece of text and setting the summarizer to be: summarizer(PIECE_OF_TEXT, max_length = 50).
I was expecting the summarizer to generate a summary within 50 words, but it seems to only generate a summary that is cut off (the summary ends with a comma and doesn't end in a grammatically sensible way). See example below.
**The piece of text to be summarized:**
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy.1 Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease.2-7 Despite the approval of several targeted therapies by entities such as the Food and Drug Administration, the European Medicines Agency, and the Pharmaceuticals and Medical Devices Agency, the survival rate among patients with metastatic renal-cell carcinoma has plateaued.
Both the VEGF receptor tyrosine kinase inhibitor axitinib and the anti–programmed death 1 (PD-1) monoclonal antibody pembrolizumab have shown antitumor activity in patients with previously untreated advanced clear-cell renal-cell carcinoma.6,10 In a phase 1b trial involving patients with previously untreated metastatic renal-cell carcinoma, 73% (95% confidence interval [CI], 59 to 84) of the patients who received pembrolizumab plus axitinib had a response; 65% of patients had at least one treatment-related adverse event.11 We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axitinib would result in better outcomes than sunitinib in patients with previously untreated advanced renal-cell carcinoma.
**And the summary:**
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first, axitinib and the anti–programmed death 1 (PD-1) monoclonal antibody pembrolizumab have shown antitumor activity in patients with previously untreated advanced clear-cell renal-cell carcin, | 04-01-2020 21:50:09 | 04-01-2020 21:50:09 | **Try using the T5 summarizer instead like below:**
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
inputs = tokenizer.batch_encode_plus(["summarize: " + example_text], max_length=1024, return_tensors="pt", pad_to_max_length=True) # Batch size 1
outputs = model.generate(inputs['input_ids'], num_beams=4, max_length=50, early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs])
```
**The above excerpt gave me a summary of:**
*'the survival rate among patients with metastatic renal-cell carcinoma has plateaued . agents such as sunitinib that target the vascular endothelial growth factor pathway are standard first-line therapy for advanced disease'*
**If you still want to use Bart:**
My assumption is that this is not a bug. I may be wrong, but it seems the Bart summarizer just has a bias towards pointing to the first couple sentences of the original text. It's still abstractive, as can be seen by subtle differences in the summary you're getting. If you specify `min_length` as a higher value, like 100, you start to see that there are pointers to sentences that are not just in the first couple sentences.
**Trying a `min_length` of a 100 using `bart-large-cnn` gave me the below summary:**
*'Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease. **We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axit inib would result in better outcomes than sunit in patients with previously untreated advanced renal- cell carcinoma.**'`*
You can see that the last sentence is not a part of the initial text excerpt<|||||>As @aychang95 suggested you have to play around with the `generate` method arguments to see what works best for your example. Especially take a look at `num_beams`, `max_length`, `min_length`, `early_stopping` and `length_penalty`.
I just noticed that I forget to add a good default setting to the Bart summarization pipeline. Just uploaded it - see here: https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/config.json
The summarization pipeline should work better now :-) <|||||>> As @aychang95 suggested you have to play around with the `generate` method arguments to see what works best for your example. Especially take a look at `num_beams`, `max_length`, `min_length`, `early_stopping` and `length_penalty`.
>
> I just noticed that I forget to add a good default setting to the Bart summarization pipeline. Just uploaded it - see here: https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/config.json
>
> The summarization pipeline should work better now :-)
Thank you! How do I go about updating the model? My code is below but I receive an error:
```
from transformers import pipeline, AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModel.from_pretrained("facebook/bart-large-cnn")
summarizer = pipeline("summarization", model = model, tokenizer = tokenizer)
```
> OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list (bart-large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed 'facebook/bart-large-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
<|||||>```
from transformers import pipeline, AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("bart-large-cnn")
model = AutoModelWithLMHead.from_pretrained("bart-large-cnn")
summarizer = pipeline("summarization", model = model, tokenizer = tokenizer)
```
works :-).
Note that "bart-large-cnn" is the default model for the summarization pipeline. The code above is equivalent to:
```
from transformers import pipeline
summarizer = pipeline("summarization")
```<|||||>I was also able to discover another reason of why the summarization cut off. I believe setting the max_length conflicted with whatever the default min_length was. It looks like max_length takes priority and so the summary was cut off. I think it would be useful if this was managed automatically somehow, or at least display a warning.<|||||>Hi @patrickvonplaten I just found that summarization takes 1024 words into consideration for generating a summary on its default parameters. I would like to know if I can increase the input size in order to consider more words while generating a summary in any case.
I got the following message.
`Your max_length is set to 1300, but you input_length is only 1024. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
`
<|||||>As far as I know for `Bart` the `max_length` is 1024 and for `T5` it's 512. So depending on your model, I don't think you can increase the `max_length` higher than its `max_length` value.<|||||>@patrickvonplaten I got your point. I have another question, what is the maximum token ( or words ) we can provide to Bart for a summary generation. Also, what should I do in order to generate a summary from a large text which contains approximately 100k words in it?<|||||>A text that contains 100k words is probably more of a novel than a "text" :D.
So for these kinds of text using Bart you would need to chunk the text. Your memory would explode anyways at such sizes. In a couple of days we will add Reformer which can handle super long input text. We will soon also have an encoder-decoder model for Reformer which you could then use for summarization. |
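A rough sketch of that chunking idea, splitting on tokens and summarizing each piece; the chunk size and the checkpoint name are assumptions, and `long_text` stands for your document:
```python
# Sketch: summarize a very long document by chunking it to <= 1024 tokens per piece.
from transformers import BartTokenizer, pipeline

tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
summarizer = pipeline("summarization")

token_ids = tokenizer.encode(long_text, add_special_tokens=False)  # long_text: your document
chunk_size = 900  # leave room for special tokens; assumption
summaries = []
for i in range(0, len(token_ids), chunk_size):
    chunk = tokenizer.decode(token_ids[i:i + chunk_size])
    summaries.append(summarizer(chunk, max_length=150, min_length=40)[0]["summary_text"])
combined_summary = " ".join(summaries)
```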
transformers | 3,578 | closed | [WIP] Adding model parallelism for T5 (should work for other models as well) | This PR adds:
- a `get_block_list()` utility method which returns a list of the blocks in a Transformers model (currently only added on T5). Block can be Modules or list/tuple of Modules (if a single transformer block is spread in several ModuleList like in XLM).
- a `spread_on_devices(devices: Optional[List] = None)` method to spread a model on several devices by spreading the transformers blocks (roughly) evenly on the provided device list or all visible CUDA devices if no device list is given. The first device will host the remaining non-block modules in addition (the embeddings usually).
Currently, the code is in the T5 model but should be generic enough to be applied to other models if needed.
To use:
``` python
model = T5ForConditionalGeneration.from_pretrained('...')
model.spread_on_devices() # Will spread on all visible CUDA devices by default
input = torch.tensor([...]).to('cuda:0') # Inputs and outputs are on the first device
model(input) # you should probably use only positional arguments for the forward pass (see spread_on_devices's docstring)
```
TODO:
- [ ] try it
- [ ] add tests if possible (on a dummy device list like ['cpu', 'cpu']?)
cc @patrickvonplaten @craffel | 04-01-2020 21:27:55 | 04-01-2020 21:27:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=h1) Report
> Merging [#3578](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4ee4da18ad659b196582bbdf40785033ee1d26b?el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `12.90%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3578 +/- ##
==========================================
- Coverage 78.05% 77.94% -0.11%
==========================================
Files 100 100
Lines 17135 17166 +31
==========================================
+ Hits 13374 13380 +6
- Misses 3761 3786 +25
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <12.90%> (-4.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.62% <0.00%> (+0.32%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=footer). Last update [a4ee4da...3bfeebe](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello! Do you have plans to merge this feature to master branch?
I tried to make it locally in a cloned repo, but I got an error when I tried to use it:
<ipython
> -input-22-5591bd8e45c0> in main()
> 143 cache_dir=model_args.cache_dir,
> 144 )
> --> 145 model = model.spread_on_devices(['cpu', 'cpu'])
> 146
> 147 # Get datasets
>
> /usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in spread_on_devices(self, devices)
> 936 return
> 937
> --> 938 modules_to_move = set(self.modules)
> 939
> 940 # Evenly spread the blocks on devices
>
> TypeError: 'method' object is not iterable<|||||>Hey @exelents,
At the moment I don't think anybody is working on it and I'm not sure what the importance of this PR is at the moment. Feel free to take over the PR and try to make it work. I would be more than happy to help you if you open a PR :-) <|||||>This is very much related: https://github.com/huggingface/transformers/issues/7526<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This feature was awesome! I think this would be a major improvement to the transformers package! |
transformers | 3,577 | closed | DistilBert not giving hidden states | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBert
Language I am using the model on (English, Chinese ...): Multilingual
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) just running inference
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load modules
2. Create model with output_hidden_layers=True
3. run inference on one sentece
`from transformers import BertModel, BertTokenizerFast, BertConfig`
`from transformers import DistilBertModel, DistilBertTokenizerFast, DistilBertConfig`
`import torch`
`bert_config = BertConfig(output_hidden_states=True)`
`bert = BertModel(bert_config)`
`bert = bert.from_pretrained("bert-base-multilingual-cased")`
`bert_tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")`
`distil_bert_config = DistilBertConfig(output_hidden_states=True)`
`distil_bert = DistilBertModel(distil_bert_config)`
`distil_bert = distil_bert.from_pretrained("distilbert-base-multilingual-cased")`
`distil_bert_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")`
`sentence = "One stupid dummy sentence to test !"`
`input_bert = torch.tensor(bert_tokenizer.encode(sentence)).unsqueeze(0)`
`input_distil_bert = torch.tensor(distil_bert_tokenizer.encode(sentence)).unsqueeze(0)
`
`output_bert = bert(input_bert)`
`outupt_distil_bert = distil_bert(input_distil_bert)`
## Expected behavior
Return a tuple with 2 elements (like Bert)
Exemple of bert and desired behavior of distilbert:
`print(len(output_bert)) # => 2`
`print(output_bert[0].size()) # => 1, 18, 768`
`print(output_bert[1].size()) # => 1, 768`
## Real behavior
Return tuple of only 1 element
`print(len(outupt_distil_bert)) # => 1`
`print(outupt_distil_bert[0].size()) # => 1, 18, 768`
## Environment info
- `transformers` version: 2.7.0
- Platform: Linux 5.5.8-arch1-1
- Python version: Python 3.8.1
- PyTorch version (GPU?): 1.4.0 (GPU)
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-01-2020 21:15:05 | 04-01-2020 21:15:05 | @Ierezell I am facing same issue. How did you fix this issue? |
transformers | 3,576 | closed | T5 fine tune for seq2seq generation | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi
Is a script available for fine-tuning T5 base or large to do seq2seq generative tasks like translation or dialog generation?
https://github.com/huggingface/transformers/blob/master/examples/run_generation.py
Doesn't seem to have T5
| 04-01-2020 20:48:05 | 04-01-2020 20:48:05 | @patrickvonplaten <|||||>Not yet :-) For translation you could try to create a `run_train.py` script using the following resources:
- How to run `T5` for translation: https://github.com/huggingface/transformers/tree/master/examples/translation/t5
- How to train `Bart` for summarization (should be very similar to "How to train" T5 for translation): https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_train.sh and https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_bart_sum.py<|||||>For dialog generation - I would not recommend using T5, but rather a "decoder-only" model like gpt2. You could take a look at this script with implements a SOTA dialogue bot using DialoGPT from Microsoft: https://huggingface.co/microsoft/DialoGPT-medium#how-to-use<|||||>Thanks for answering @patrickvonplaten
One more query: how to create the **data files** and **vocab file** for T5.
If I am not wrong, it requires 4 data files and 1 vocab file. And in train.source and val.source files, each instance should have a prefix like "translate English to German: ". I prepare the data in the same format.
It gives this error.
TypeError: 'NoneType' object is not iterable
<|||||>@sshleifer <|||||>in examples/transformer_base.py change line 105 to
```python
avg_loss = getattr(self.trainer, "avg_loss", 0.0)
```<|||||>@sshleifer Thanks for the reply. It didn't work. and have the same error. Below is the screenshot of the changed line.
<img width="937" alt="Screenshot 2020-04-12 at 9 17 24 PM" src="https://user-images.githubusercontent.com/30004110/79077852-8a308c00-7d04-11ea-92a2-30d9c6fb0c84.png"><|||||>@prabalbansal I think @sshleifer means line 107, so I added this patch in PR #3768<|||||>@hugoabonizio Thanks for the patch. It works.<|||||>@sshleifer @hugoabonizio When I use the model to predict for test set using the following command:
python '/content/transformers-master/examples/summarization/bart/run_bart_sum.py' --data_dir='/content/drive/My Drive/two_keywords/' --model_type=t5 --output_dir=/content/t5 --do_predict --model_name_or_path=t5-small
Error generated:
<img width="1010" alt="Screenshot 2020-04-13 at 6 18 47 PM" src="https://user-images.githubusercontent.com/30004110/79137728-8a3b9500-7db3-11ea-90e4-218cbc3e1e74.png">
<|||||>> Thanks for answering @patrickvonplaten
> One more query: how to create the **data files** and **vocab file** for T5.
>
> If I am not wrong, it requires 4 data files and 1 vocab file. And in train.source and val.source files, each instance should have a prefix like "translate English to German: ". I prepare the data in the same format.
> It gives this error.
> TypeError: 'NoneType' object is not iterable
> 
Hi, I didn't understand why you have to prepare the vocab file. I think the pertrained T5 and its default tokenizer will take care of the tokenization? Thanks for your response.<|||||>@MichaelZhouwang yes we didn't need vocab file here.<|||||>> For dialog generation - I would not recommend using T5, but rather a "decoder-only" model like gpt2. You could take a look at this script with implements a SOTA dialogue bot using DialoGPT from Microsoft: https://huggingface.co/microsoft/DialoGPT-medium#how-to-use
Hi @patrickvonplaten,
I'm trying out T5 and BART for dialogue generation.
I'm wondering why you say that it's better to just have a decoder. Both FacebookAI's Blender and Google's Meena had encoders in their architectures.
What's the reason for decoder-only systems being better?<|||||>1) @Valdegg I think you are correct that it makes sense to use a seq2seq model.
2) We are also currently working on porting blenderbot from parlai, which was trained on dialogue. 3) We have new forums at https://discuss.huggingface.co/ for discussing higher-level things like which model to use. <|||||>Same question: is there any example of training T5 to translate multiple sentences? |
transformers | 3,575 | closed | Create README.md | 04-01-2020 20:05:21 | 04-01-2020 20:05:21 | ||
transformers | 3,574 | closed | [Benchmark] QUAERO French Medical Corpus for Named Entity Recognition | # 🖥 Benchmarking `transformers`
## Benchmark
I am trying the Transformers library out on [QUAERO French Medical Corpus](https://quaerofrenchmed.limsi.fr/) NER-dataset (consisting of Medline titles and EMEA documents). Using Camembert-base 'out of the box' with default hyperparameters, I get an F1 measure of 85% on the EMEA dataset and 64% on Medline, while [one of the few papers](http://ceur-ws.org/Vol-1391/158-CR.pdf) I found that did an experiment on the same dataset reported F1 of 70% and 52%, respectively, using a classic CRF.
Since extensive grid search for hyperparameter optimization is computationally expensive even with a GPU, and given that I am relatively new to the field, I was wondering how to actually go about further optimizing the current model that I have. Is it even worth doing a lot of hyperparameter optimization or is it common that out of the box transformer models already do a decent job?
In particular, I'm not so sure which of the following things are worthwhile trying out;
- Making the input for the BERT model longer than 1 sentence
e.g. 3 input sentences [CLS] sentence1 sentence2 sentence3 [SEP]
- Extending the vocab of the tokenizer by adding tokens related to the medical domain? Not sure
if this even makes sense doing..
- Which hyperparameters do I focus on the most?
I assume epochs and batch size are the most important, but which others are worth trying out as well?
- Other suggestions to improve this model?
If you need more information regarding the setup I will be happy to provide it, thanks!
| 04-01-2020 16:24:44 | 04-01-2020 16:24:44 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,573 | closed | How can I use masked_lm_labels correctly? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi, I have a dataset like :
From Monday to Friday most people are busy working or studying, but in the evenings and weekends they are free and _ themselves.
And there are four candidates for the missing blank area:
["love", "work", "enjoy", "play"], here "enjoy" is the correct answer, it is a cloze-style task, and it looks like the maskLM in the BERT.
I want to train the model so that it works better on this task. I noticed that there is a parameter called masked_lm_labels, which can be used to compute the masked language modeling loss. What should I do to train the BertForMaskedLM model with it?
Do you have any example? Or can you teach me how to do that?
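For illustration, a small sketch of how `masked_lm_labels` is typically built for a cloze example like the one above: positions that are not masked are set to -100 so the loss ignores them. The sentence is shortened from the example, and `masked_lm_labels` was later renamed to `labels` in newer releases.
```python
# Sketch: fine-tune BertForMaskedLM on a cloze example using masked_lm_labels.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "in the evenings and weekends they are free and [MASK] themselves ."
answer = "enjoy"

input_ids = tokenizer.encode(text, return_tensors="pt")
labels = torch.full_like(input_ids, -100)          # -100 = ignore these positions in the loss
mask_index = (input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
labels[0, mask_index] = tokenizer.convert_tokens_to_ids(answer)

outputs = model(input_ids, masked_lm_labels=labels)  # renamed to `labels` in newer versions
loss = outputs[0]
loss.backward()
```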
Thanks!
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-01-2020 15:42:00 | 04-01-2020 15:42:00 | Can any one help me
QAQ<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,572 | closed | BART run run_train.sh RuntimeError: expected device cuda:0 but got device cpu | # 🐛 Bug
## Information
Model I am using (BART):
Language I am using the model on CNN dailymail (English, Chinese ...):
The problem arises when using:
* [* ] the official example scripts: (give details below)
text summarization of bart, run the script run_train.sh, following the guideline
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [*] an official GLUE/SQUaD task: (give the name)
the text summarization task, using cnn dailymail dataset.
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. run the script run_train.sh in examples/summarization/bart/run_train.sh
2. got the error information RuntimeError: expected device cuda:0 but got device cpu
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: I pip install transformers two days ago, I am not sure the version.
- Platform: ubuntu 1604
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Traceback:
Traceback (most recent call last):
File "run_bart_sum.py", line 166, in <module>
trainer = generic_train(model, args)
File "******/transformer_base.py", line 304, in generic_train
trainer.fit(model)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 676, in fit
mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 341, in ddp_train
self.run_pretrain_routine(model)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 924, in run_pretrain_routine
False)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 263, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 418, in evaluation_forward
output = model(*args)
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/overrides/data_parallel.py", line 96, in forward
output = self.module.validation_step(*inputs[0], **kwargs[0])
File "******/run_bart_sum.py", line 58, in validation_step
loss = self._step(batch)
File "******/run_bart_sum.py", line 44, in _step
lm_labels=lm_labels.cuda(),
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/run_bart_sum.py", line 32, in forward
lm_labels=lm_labels,
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 925, in forward
decoder_cached_states=decoder_cached_states,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 844, in forward
decoder_cached_states=decoder_cached_states,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 499, in forward
need_attn_weights=self.output_attentions,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 372, in forward
attn_mask=attention_mask,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 629, in forward
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask
RuntimeError: expected device cuda:0 but got device cpu
| 04-01-2020 14:58:14 | 04-01-2020 14:58:14 | I meet the same bug when i use bart as an embedding layer.
Have u solve the problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> I meet the same bug when i use bart as an embedding layer.
> Have u solve the problem?
any luck on this? seeing the same "RuntimeError: expected device cuda:0 but got device cpu" |
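A hedged guess at the cause, as a sketch only (the `_step` name and the batch keys are taken from the traceback above; everything else is an assumption, not from the thread): under DDP each spawned process owns a different GPU, so a bare `.cuda()` call (which goes to the default device, typically `cuda:0`) can clash with tensors the trainer already placed on the process-local device. Keeping every tensor on the model's own device avoids mixing devices:
```python
def _step(self, batch):
    # Move the batch to wherever this process's model replica lives,
    # instead of calling .cuda(), which targets the default device.
    device = next(self.model.parameters()).device
    batch = {k: v.to(device) for k, v in batch.items()}
    outputs = self.model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        lm_labels=batch["lm_labels"],
    )
    return outputs[0]
```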
transformers | 3,571 | closed | transformers pipeline in GCP cloud functions | # ❓ Transformers pipeline in GCP
I am trying to use the transformers pipeline in GCP Cloud Functions. Every time the function is called, the model is downloaded again. How can we solve this issue?
| 04-01-2020 11:32:47 | 04-01-2020 11:32:47 | You could cache the model once in your environment and then load it from there. Just point `from_pretrained` to the directory containing the model and configuration (or tokenizer file if loading the tokenizer) instead of the S3 link.<|||||>@LysandreJik Completely understood and right but this is different case here. I am trying to make use of pipeline and load using pipeline (). And now how this can be achieved in GCP. <|||||>The pipeline also accepts directories as models and tokenizers. See the [pipeline documentation](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.pipeline):
- model (str or PreTrainedModel or TFPreTrainedModel, optional, defaults to None) –
The model that will be used by the pipeline to make predictions. This can be None, **a string checkpoint identifier** or an actual pre-trained model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.
If None, the default of the pipeline will be loaded.<|||||>Thanks @LysandreJik. The documentation link did help to get better clarity. Will try and get back. <|||||>Tried this way:
```python
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-pytorch_model.bin")
nlp_qa = pipeline('question-answering', model=model, tokenizer=tokenizer)
```
Getting
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte`<|||||>You can't put a URL like that. It has to be a local file path, like it is shown in the documentation. You can either fetch them and save them to a directory:
```
mkdir local_path
cd local_path
wget https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-pytorch_model.bin
wget https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-config.json
wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt
```
or in Python:
```py
model = DistilBertModel.from_pretrained("distilbert-base-cased-distilled-squad")
model.save_pretrained("local_path")
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
tokenizer.save_pretrained("local_path")
```
You can then access this model/tokenizer:
```py
nlp = pipeline("question-answering", model="local_path", tokenizer="local_path")
```<|||||>Thanks @LysandreJik <|||||>@vijender412 may I ask how you got a newer version of pytorch to work on cloud function? I'm unable to get mine to build with anything later than torch version 1.0.1 which is stopping me from using pipeline :/<|||||>I tried to run transformers on Cloud Functions v1 but as expected I could not run it due to the lack of its resources.
@vijender412
Did you make it? |
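A minimal sketch of the caching pattern discussed above for Cloud Functions, loading the pipeline once in the global scope so that warm invocations reuse it; `local_path` and the handler name are assumptions, not from the thread:
```python
from transformers import pipeline

# Loaded once per container instance (cold start only); bundle "local_path"
# with the function package using the save_pretrained() snippet shown above.
nlp = pipeline("question-answering", model="local_path", tokenizer="local_path")

def answer_question(request):
    payload = request.get_json()
    # Warm invocations reuse the already-loaded pipeline instead of re-downloading.
    return nlp(question=payload["question"], context=payload["context"])
```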
transformers | 3,570 | closed | tokenizer cannot load from model on disk | I wanted to load a model saved on disk but it keeps on throwing this error
File "train.py", line 1, in <module>
import config
File "/media/saurabh/D/code/bert_imdb_sentiment/src/config.py", line 14, in <module>
do_lower_case=True
File "/home/saurabh/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 393, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/home/saurabh/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 496, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name '.../input/bert_based_uncased/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '..input//bert_based_uncased/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. | 04-01-2020 10:24:03 | 04-01-2020 10:24:03 | @makaveli10 how to solve it ? |
transformers | 3,569 | closed | Regarding distilbert-multilingual-uncased model | I am using pretrained distilbert-multilingual-uncased model to get the embeddings of a sentence. I want to ask which layer would be great for taking the semantic embedding of a sentence | 04-01-2020 10:11:08 | 04-01-2020 10:11:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,568 | closed | Create README.md | added documentation for our fine-tuned BERT model | 04-01-2020 09:28:12 | 04-01-2020 09:28:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=h1) Report
> Merging [#3568](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.96%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3568 +/- ##
==========================================
+ Coverage 76.90% 77.87% +0.96%
==========================================
Files 100 100
Lines 17127 17127
==========================================
+ Hits 13172 13338 +166
+ Misses 3955 3789 -166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=footer). Last update [b38d552...2298b3b](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Can you please add a
```
---
language: german
---
```
metadata block at the top of the file?
Also, cc'ing @severinsimmler who might be interested in this (if you guys don't know each other already)<|||||>Hey! I added the metablock.
One question: We uploaded our models as described in the huggingface documentation and everything looks okay, but when we try to test them with the suggested code, we get a message that the model is not found (OS Error: Model name was not found in tokenizers model name list). Could you please check if we made some error?
this is the test code we used:
> tokenizer = AutoTokenizer.from_pretrained("redewiedergabe/bert-base-historical-german-rw-cased")
<|||||>Hi @redewiedergabe,
I can't reproduce your error on `transformers` 2.7.0, Python 3.7.6 and macOS 10.15.4. Does loading the model with `AutoModel` work?<|||||>works for me too<|||||>Thanks! [Model page](https://huggingface.co/redewiedergabe/bert-base-historical-german-rw-cased) |
transformers | 3,567 | closed | Add tiny-bert-bahasa-cased model card | 04-01-2020 07:33:11 | 04-01-2020 07:33:11 | ||
transformers | 3,566 | closed | BertJapaneseTokenizer accept options for mecab | Now we can pass `mecab_kwargs` to `BertJapaneseTokenizer.__init__` and set tokenizer more accurately.
Changes:
1. `BertJapaneseTokenizer.__init__` accepts `mecab_kwargs` keyword argument. It is directly passed to `MeCabTokenizer.__init__`
2. Now we can disable `normalize_text` in `MeCabTokenizer` through `mecab_kwargs`
3. Also we can pass the argument to `MeCab.Tagger.__init__` through `mecab_kwargs["mecab_option"]`. It is useful to customize mecab's dictionary. | 04-01-2020 06:57:33 | 04-01-2020 06:57:33 | @singletongue Please give your opinion.<|||||>Great! It looks good to me.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=h1) Report
> Merging [#3566](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.96%`.
> The diff coverage is `33.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3566 +/- ##
==========================================
+ Coverage 76.90% 77.87% +0.96%
==========================================
Files 100 100
Lines 17127 17127
==========================================
+ Hits 13172 13338 +166
+ Misses 3955 3789 -166
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bert\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.46% <33.33%> (ø)` | |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=footer). Last update [b38d552...15522ac](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good to me. @LysandreJik? |
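A usage sketch for the `mecab_kwargs` option described in this PR; the pretrained name and the dictionary path are placeholders, not values from the PR:
```python
from transformers import BertJapaneseTokenizer

# Changes 2 and 3 from the PR description: disable text normalization and
# forward a custom dictionary option to MeCab.Tagger.
tokenizer = BertJapaneseTokenizer.from_pretrained(
    "bert-base-japanese",
    mecab_kwargs={
        "normalize_text": False,
        "mecab_option": "-d /path/to/custom/mecab/dic",
    },
)
```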
transformers | 3,565 | closed | Language model fine tuning using scibert as the base model | # 🐛 Bug
## Information
I am trying to fine-tune the SciBERT model on a COVID dataset. Features for the training data are not being calculated properly.
Model I am using (Bert, XLNet ...): allenai/scibert_scivocab_uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Language Modelling
* [ ] my own task or dataset: my own dataset in text files
## To reproduce
Steps to reproduce the behavior:
1. python examples/run_language_modeling.py --output_dir=./lm_finetune --model_name_or_path=allenai/scibert_scivocab_uncased --do_train --train_data_file=lm_data/train.txt --do_eval --eval_data_file=lm_data/dev.txt --mlm --tokenizer_name allenai/scibert_scivocab_uncased --model_type bert
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Finetuned model and perplexity score on evaluation data
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6
- Platform: "CentOS Linux 7"
- Python version: 3.6.9
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
What should I provide the `model_type` as?
| 04-01-2020 06:52:21 | 04-01-2020 06:52:21 | SciBert is a BERT model.
Please be more descriptive. Saying that features are not calculated correctly is not very helpful. Please describe the problem in full.<|||||>Hi @BramVanroy , following is the error stack
```code
04/01/2020 09:10:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1000000000000, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='/media/data1/ravi/covid-challenge/lm_data/dev.txt', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='allenai/scibert_scivocab_uncased', model_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='./lm_finetune', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name='allenai/scibert_scivocab_uncased', train_data_file='/media/data1/ravi/covid-challenge/lm_data/train.txt', warmup_steps=0, weight_decay=0.0)
04/01/2020 09:10:02 - INFO - __main__ - Creating features from dataset file at /media/data1/ravi/covid-challenge/lm_data
04/01/2020 09:28:10 - INFO - __main__ - Saving features into cached file /media/data1/ravi/covid-challenge/lm_data/bert_cached_lm_999999999998_train.txt
Traceback (most recent call last):
File "examples/run_language_modeling.py", line 781, in <module>
main()
File "examples/run_language_modeling.py", line 731, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "examples/run_language_modeling.py", line 224, in train
train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)
File "/media/data2/anaconda/envs/covid/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 94, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
```
In the `bert_cached_lm_999999999998_train.txt` file there is only one line. `train.txt` file is of size **345MB**.
Thank you for your help!!<|||||>This seems to indicate a problem with the dataset. Can you post the contents of bert_cached_lm_999999999998_train.txt? Maybe you chose a block size that is larger than your data size.<|||||>This is probably the same error as https://github.com/huggingface/transformers/issues/3443#issuecomment-607422291<|||||>@graviraja Had the same issue which is most likely related to
`model = torch.nn.DataParallel(model)` in [trainer.py](https://github.com/huggingface/transformers/blob/8e093e5981e573a0b591dc57e8d52cc3efe82230/src/transformers/trainer.py#L250)
Uncommenting this line or using only one GPU
`export CUDA_VISIBLE_DEVICES=1`
works in my case:
<|||||>> CUDA_VISIBLE_DEVICES
Note that `CUDA_VISIBLE_DEVICES=1` does not mean to use "just one GPU", but it means to specifically use GPU with ID#1. However, device are zero-indexed, so the first GPU on your system will typically be #0: `CUDA_VISIBLE_DEVICES=0`<|||||>Thanks, @BramVanroy , makes perfect sense.
@graviraja I've set up a [notebook](https://github.com/Nikoschenk/language_model_finetuning/blob/master/scibert_fine_tuner.ipynb) with the required functionality. Previous comments regarding `block_size` were in fact crucial.
<|||||>Thank you @Nikoschenk @BramVanroy for the support. |
transformers | 3,564 | closed | Tokenizers: setting bos_token_id = 0 and adding language_pair_codes | I am unable to set bos_token_id=0 for a new SentencePiece tokenizer (MBART).
Here is what I'm doing?
```bash
wget https://s3.amazonaws.com/models.huggingface.co/bert/facebook/mbart-large-en-ro/sentence.bpe.model
```
```python
from transformers import T5Tokenizer
vocab_file = 'sentence.bpe.model'
t2 = T5Tokenizer(vocab_file, bos_token='<s>', bos_token_id=0)
t2.bos_token_id # => 1
```
The following also returns 1
```python
t2 = T5Tokenizer(vocab_file, bos_token='<s>', bos_token_id=0,
additional_special_tokens=['<s>'])
t2.bos_token_id
```
Help much appreciated! | 04-01-2020 05:30:09 | 04-01-2020 05:30:09 | You can't set the ids, they are set automatically from the sentence piece model.
But (1) why are you using the T5Tokenizer for a Bart checkpoint and (2) why do you want to tweak the id?<|||||>(1) I used the `T5Tokenizer` in order to make a runnable example that did not require checking out my `mbart` branch.
(2) Fairseq's MBART logic is split into two stages:
- use `spm_encode --model sentence.bpe.model` to preprocess. (this is like encode_as_pieces in python).
- use a `vocab.json` style lookup to convert each token to an ID.
I'm trying to do that in one step, using `sp_model.encode_as_ids`, but my ids are off by 1, because the special tokens (sp_model.bos_token, etc) are different than fairseq's dictionary object:

So I need to either manipulate the sp_model, retrain it with correct control codes, or try a different approach.
<|||||>Yes you can check how we do these token index offset stuff (it’s specific to fairseq + sentencepiece) in Camembert and XLMRoberta tokenizers.<|||||>Extremely helpful! Mbart also adds a language code like en_XX and ro_RO to the end of the source and target sentences. So the sentences are like `[tokens]+[<eos>, <language_id>]`
Do we have any tokenizers that do that? <|||||>can't find an easy way to generate examples like
```python
input_ids = [src_tokens]+[<eos>, <src_language_id>]
decoder_input_ids = [tgt_tokens]+[<eos>, <tgt_language_id>]
```
where the special tokens depend on the language.
My best idea is to add a method
```python
def prepare_language_pair_batch(self, source_sentences, source_lang, target_sentences=None, target_lang=None):
# encode source sentence
# if target_sentence is None ignore it else process it
return {input_ids=encoded_source_ids, attention_mask=attention_mask, decoder_input_ids=processed_target}
```
(Could also overwrite `prepare_inputs_for_model` and add arguments.)
Two other ideas that don't quite work:
- Try to stuff the language codes into the string as text in `prepare_text_for_tokenization`. The problem is this would go before EOS.
- Try to do the magic in `build_inputs_with_special_tokens`. the problem is that you still can't use `prepare_for_model` because it doesn't pass kwargs to `build_inputs_with_special_tokens`.
We could also instantiate two tokenizers with different special tokens, but that feels wasteful.
@LysandreJik @patrickvonplaten <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> Yes you can check how we do these token index offset stuff (it’s specific to fairseq + sentencepiece) in Camembert and XLMRoberta tokenizers.
For posterity, I think Thomas means this:
```
https://huggingface.co/transformers/v4.6.0/_modules/transformers/models/camembert/tokenization_camembert.html
https://huggingface.co/transformers/v3.5.1/_modules/transformers/tokenization_xlm_roberta.html
```
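Following up on the Camembert/XLM-R pointer above, a hedged illustration of the fairseq-offset pattern those tokenizers use (the special-token ids and the offset of 1 mirror that pattern; they are not claimed to be MBART's exact mapping):
```python
class FairseqAlignedIds:
    # Map sentencepiece pieces to fairseq-style ids: fairseq reserves the first
    # dictionary slots for its own specials, so plain pieces are shifted by an offset.
    def __init__(self, sp_model, fairseq_offset=1):
        self.sp_model = sp_model
        self.fairseq_offset = fairseq_offset
        self.fairseq_specials = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}

    def piece_to_id(self, piece):
        if piece in self.fairseq_specials:
            return self.fairseq_specials[piece]
        return self.sp_model.PieceToId(piece) + self.fairseq_offset
```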
|
transformers | 3,563 | closed | update run_language_modeling.py for high efficiency in Multi GPUs | The code "model(inputs, masked_lm_labels=labels)" will return all outputs which causes out of GPU memory in train() function. After modifying the code, batch size per GPU increases from 4 to 32 in Multi GPUs. | 04-01-2020 03:44:47 | 04-01-2020 03:44:47 | |
transformers | 3,562 | closed | can not init tokenizers from third party model , on albert model | # 🐛 Bug
## Information
Model I am using (albert.):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ *] the official example scripts: (give details below)
follow the instructions on :
`https://huggingface.co/models`
such as use "voidful/albert_chinese_tiny" model,
`AutoTokenizer.from_pretrained('voidful/albert_chinese_tiny')`
will raise
` Model name 'voidful/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.` | 04-01-2020 03:40:28 | 04-01-2020 03:40:28 | Hi, could you specify which version of `transformers` you're running?<|||||>I encountered the same problem when using Albert. @voidful
```
AutoTokenizer.from_pretrained('voidful/albert_chinese_xxlarge')
```
will raise
```
04/04/2020 14:21:28 - INFO - Model name 'voidful/albert_chinese_xxlarge' not found in model shortcut name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). Assuming 'voidful/albert_chinese_xxlarge' is a path, a model identifier, or url to a directory containing tokenizer files.
Traceback (most recent call last):
File "preprocess.py", line 353, in <module>
main()
File "preprocess.py", line 303, in main
tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path, do_lower_case=not args.cased, cache_dir=args.cache_dir)
File "/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 192, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 393, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 496, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name 'voidful/albert_chinese_xxlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xxlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
```
>>> torch.__version__
'1.3.1'
>>> transformers.__version__
'2.7.0'
```
<|||||>@LysandreJik @WiseDoge
the problem is that the model type is different from the tokenizer type.
e.g. the model uses the ALBERT model type while the tokenizer is a BERT tokenizer, so the Auto classes won't know about it.
you should let others can specify the tokenizer class or tokenizer model type if nessuary<|||||>waiting for confirm or feature requests<|||||>Thank you. I use BERT tokenizer instead, and it works.<|||||>Since sentencepiece is not used in albert_chinese model
you have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM
[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
```python
from transformers import *
import torch
from torch.nn.functional import softmax
pretrained = 'voidful/albert_chinese_large'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = AlbertForMaskedLM.from_pretrained(pretrained)
inputtext = "今天[MASK]情很好"
maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
logit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()
predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token,logit_prob[predicted_index])
```
Result: `心 0.9422469735145569`<|||||>close for now<|||||>> Since sentencepiece is not used in albert_chinese model
> you have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM
>
> [colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
>
> ```python
> from transformers import *
> import torch
> from torch.nn.functional import softmax
>
> pretrained = 'voidful/albert_chinese_large'
> tokenizer = BertTokenizer.from_pretrained(pretrained)
> model = AlbertForMaskedLM.from_pretrained(pretrained)
>
> inputtext = "今天[MASK]情很好"
>
> maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
>
> input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
> outputs = model(input_ids, masked_lm_labels=input_ids)
> loss, prediction_scores = outputs[:2]
> logit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()
> predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
> predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
> print(predicted_token,logit_prob[predicted_index])
> ```
>
> Result: `心 0.9422469735145569`
I have tried this code
from transformers import TFAutoModel, BertTokenizer
pretrained = 'voidful/albert_chinese_xlarge'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = TFAutoModel.from_pretrained(pretrained)
inputs = tokenizer("我喜欢你!", return_tensors="tf")
outputs = model(**inputs)
print(outputs)
it encounters
OSError: Can't load weights for 'voidful/albert_chinese_xlarge'. Make sure that:
- 'voidful/albert_chinese_xlarge' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'voidful/albert_chinese_xlarge' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.<|||||>> > Since sentencepiece is not used in albert_chinese model
> > you have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM
> > [colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)
> > ```python
> > from transformers import *
> > import torch
> > from torch.nn.functional import softmax
> >
> > pretrained = 'voidful/albert_chinese_large'
> > tokenizer = BertTokenizer.from_pretrained(pretrained)
> > model = AlbertForMaskedLM.from_pretrained(pretrained)
> >
> > inputtext = "今天[MASK]情很好"
> >
> > maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)
> >
> > input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1
> > outputs = model(input_ids, masked_lm_labels=input_ids)
> > loss, prediction_scores = outputs[:2]
> > logit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()
> > predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()
> > predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
> > print(predicted_token,logit_prob[predicted_index])
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Result: `心 0.9422469735145569`
>
> I have tried this code
> from transformers import TFAutoModel, BertTokenizer
> pretrained = 'voidful/albert_chinese_xlarge'
> tokenizer = BertTokenizer.from_pretrained(pretrained)
> model = TFAutoModel.from_pretrained(pretrained)
>
> inputs = tokenizer("我喜欢你!", return_tensors="tf")
> outputs = model(**inputs)
>
> print(outputs)
>
> it encounters
>
> OSError: Can't load weights for 'voidful/albert_chinese_xlarge'. Make sure that:
>
> * 'voidful/albert_chinese_xlarge' is a correct model identifier listed on 'https://huggingface.co/models'
> * or 'voidful/albert_chinese_xlarge' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
You need to add `from_pt=True` in order to load a pytorch checkpoint.
```python
from transformers import TFAutoModel, BertTokenizer
pretrained = './albert_chinese_tiny'
tokenizer = BertTokenizer.from_pretrained(pretrained)
model = TFAutoModel.from_pretrained(pretrained, from_pt=True)
inputs = tokenizer("我喜欢你!", return_tensors="tf")
outputs = model(**inputs)
``` |
transformers | 3,561 | closed | Evaluation of labelled test set? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi,
I have a labelled test set of QQP data set. What arguments do i need to input if i want to report accuracy and F1 on test set not just the development set?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-01-2020 02:56:32 | 04-01-2020 02:56:32 | Hi, you could use model.eval() or load pre-tuned model and run against test set
# Load a trained model and vocabulary that you have fine-tuned
model = model_class.from_pretrained(output_dir)
tokenizer = tokenizer_class.from_pretrained(output_dir)
See this post: https://mccormickml.com/2019/07/22/BERT-fine-tuning/#a1-saving--loading-fine-tuned-model
<|||||>+1
Is there something similar for testing to the `--do_train` or `--do_eval` flag in the glue examples?<|||||>@Mahmedturk I tried it like this now:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
import pandas as pd
tokenizer = BertTokenizer.from_pretrained('./my-model')
model = BertForSequenceClassification.from_pretrained('./my-model')
labels = [ ... ]
df_test = pd.read_csv('./my-data/test.tsv', sep='\t', names=['label', 'sentence'])
df_test['prediction'] = None
for row in df_test.itertuples():
inputs = tokenizer.encode_plus(row.sentence, add_special_tokens=True, return_tensors='pt')
pred = model(inputs['input_ids'], token_type_ids=inputs['token_type_ids'])[0].argmax().item()
df_test.loc[row.Index, 'prediction'] = labels[pred]
```
Then you can filter the pandas dataframe, i.e. `df_test[df_test['label'] == df_test['prediction']]` to see the true positives.<|||||>@olastor is there a way to print confusion matrix?
<|||||>Also for the evaluation set, i get the following three metrics. What is "acc_and_f1" in the below?
acc = 0.9455445544554455
acc_and_f1 = 0.8709204253758709
f1 = 0.7962962962962963
<|||||>@Mahmedturk Not with a built in function in my example, but manually. Let's say you have the labels `True` and `False`, then the correct way to calculate the absolute values of the confusion matrix would be like this I think:
- number of true positives: `len(df[(df['label'] == True) & (df['prediction'] == True)])`
- number of false positives: `len(df[(df['label'] == False) & (df['prediction'] == True)])`
- number of false negatives: `len(df[(df['label'] == True) & (df['prediction'] == False)])`
- number of true negatives: `len(df[(df['label'] == False) & (df['prediction'] == False)])`<|||||>@Mahmedturk From [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/__init__.py#L41):
```python
def acc_and_f1(preds, labels):
acc = simple_accuracy(preds, labels)
f1 = f1_score(y_true=labels, y_pred=preds)
return {
"acc": acc,
"f1": f1,
"acc_and_f1": (acc + f1) / 2, # <---
}
```<|||||>@sunyangfu
After loading the saved model and vocabulary how do i run against the test set?
Sorry if this sounds silly. I am very new to PyTorch and deep learning.
The given link shows how to test on CoLa dataset which has only one sentence. Whereas in QQP dataset there are two sentences. What changes do i need to make in the code to test it with QQP dataset?<|||||>
@Mahmedturk Here you can run against the test set:
After loading the model, you can code whatever you want to fit the model.
```python
output_dir = './saved_model_dir/'
MODEL_CLASSES = {
'bert': (BertConfig, BertForSequenceClassification, BertTokenizer),
'xlnet': (XLNetConfig, XLNetForSequenceClassification, XLNetTokenizer),
'xlm': (XLMConfig, XLMForSequenceClassification, XLMTokenizer),
'roberta': (RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer),
'distilbert': (DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizer),
'albert': (AlbertConfig, AlbertForSequenceClassification, AlbertTokenizer)
}
# Config class and load a trained model and vocabulary
config_class, model_class, tokenizer_class = MODEL_CLASSES['bert']
model = model_class.from_pretrained(output_dir)
tokenizer = tokenizer_class.from_pretrained(output_dir)
# Copy the model to the GPU.
model.to(device)
# Put model in evaluation mode
model.eval()
#Then you do test data pre-processing and apply the model on test data
with torch.no_grad():
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
```<|||||>Hi @olastor,
I have tried your method, it returns the below error
df_test.loc[row.Index, 'prediction'] = labels[pred]
IndexError: list index out of range. <|||||>@Mahmedturk Did you update the list of labels in my example for your task?<|||||>labels = df_test.label.values<|||||>> labels = df_test.label.values
It needs to be the same set of labels used for training.<|||||>@olastor thanks.
|
transformers | 3,560 | closed | Mean reduce over last hidden state | One of the outputs of [TFBert](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertModel.call) is the `last_hidden_state` which is a tensor of shape `(batch_size, sequence_length, hidden_size)`.
How someone could proceed to compute the `mean pooling` of the valid embeddings? I mean, as the `attention_mask` avoids performing attention on padding token indices, it can be used as weight to average only over the real input embeddings ignoring the pad embeddings.
| 04-01-2020 00:23:17 | 04-01-2020 00:23:17 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,559 | closed | How to trace the BertForQuestionAnswering | I followed the example [here](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) and want to convert BertForQuestionAnswering to TorchScript.
Here is my code
```
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad', torchscript=True)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_ids = tokenizer.encode(question, text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
input_tensor = torch.tensor([input_ids])
token_type_ids_tensor = torch.tensor([token_type_ids])
# The way I traced the model could be wrong here
traced_model = torch.jit.trace(model, (input_tensor, token_type_ids_tensor))
traced_model.eval()
start_scores, end_scores = traced_model(input_tensor, token_type_ids_tensor)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
print(answer)
```
The answer I got is `jim henson was a nice puppet [SEP]`
Do you have any idea how to make it correct?
Thanks | 04-01-2020 00:09:23 | 04-01-2020 00:09:23 | I found a workaround solution, just to pass the complete inputs including `input_ids, token_type_ids and attention_mask` to the trace method and invoke forward along with those 3 inputs.
But it would be great to know how I can just pass in input_ids and token_type_ids<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
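Continuing the snippet from the issue body, a sketch of the workaround described above: tracing with all three inputs, assuming the positional order `(input_ids, attention_mask, token_type_ids)`:
```python
# No padding in this single example, so the attention mask is all ones.
attention_mask = torch.ones_like(input_tensor)

traced_model = torch.jit.trace(model, (input_tensor, attention_mask, token_type_ids_tensor))
start_scores, end_scores = traced_model(input_tensor, attention_mask, token_type_ids_tensor)
```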
|
transformers | 3,558 | closed | Metrics are coupled to the run_glue.py tasks. | The metrics used for evaluating a run_glue task are coupled to the task itself (i.e. regression or classification). We wrote a `TrinarySentimentProcessor` which grabs a positive, neutral, or negative sentiment from a text, but we found that the simple accuracy measure was not the right one. We wanted to use log loss (cross entropy) and so we added a log loss function to the metrics (`__init__.py`) package and added another elif to the chain.
Why is the metric so coupled to the task itself? What if you wanted to use a different metric for any particular task? I understand that `run_glue.py` is an "example" but we've been building upon the architecture to reduce the workload of training for tasks outside of GLUE and SQUAD. Maybe you could add a flag to the `run_glue.py` file like `--metric=cross-entropy`. Any thoughts? Are we just misusing the library by extending `glue.py`? | 03-31-2020 23:03:38 | 03-31-2020 23:03:38 | As you say yourself, the core of the library is to provide you with models (and recently also pipelines and in the future even more exciting things) to build your own projects. The examples show you some implementations that are in themselves usable but that are by no means meant to be exhaustive for all problems. As you indicate yourself, you are invited to adapt these examples to your own use case.
If you feel that your changes are useful for the whole community, then feel free to request a PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,557 | closed | Create model card | Create model card for: distilbert-multi-finetuned-for-xqua-on-tydiqa | 03-31-2020 20:26:06 | 03-31-2020 20:26:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=h1) Report
> Merging [#3557](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.97%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3557 +/- ##
==========================================
+ Coverage 76.90% 77.88% +0.97%
==========================================
Files 100 100
Lines 17127 17127
==========================================
+ Hits 13172 13339 +167
+ Misses 3955 3788 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=footer). Last update [b38d552...7e585a0](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,556 | closed | [T5, examples] replace heavy t5 models with tiny random models | As first done by @sshleifer in #3488 , this PR puts a tiny T5 model on S3 to save testing time. | 03-31-2020 17:13:54 | 03-31-2020 17:13:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=h1) Report
> Merging [#3556](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae6834e028ecdf7fdbe886c1f86d0e02d5fef6f0&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3556 +/- ##
=======================================
Coverage 77.80% 77.80%
=======================================
Files 100 100
Lines 17064 17064
=======================================
Hits 13277 13277
Misses 3787 3787
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=footer). Last update [ae6834e...2e14442](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
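A hedged sketch of how such a tiny random T5 checkpoint can be built locally (all sizes are arbitrary and only meant to keep tests fast; the output path is illustrative):
```python
from transformers import T5Config, T5ForConditionalGeneration

# Randomly initialized, deliberately tiny T5 for fast tests.
config = T5Config(
    vocab_size=32128,  # keep the real vocab so the standard tokenizer still fits
    d_model=64,
    d_ff=256,
    d_kv=8,
    num_layers=2,
    num_heads=2,
)
model = T5ForConditionalGeneration(config)
model.save_pretrained("./t5-tiny-random")
```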
|
transformers | 3,555 | closed | T5 for summarization: pipeline x T5ForConditionalGeneration different results | I've been doing experiments with text summarization and got some different results between pipeline and T5ForConditionalGeneration.
First I use model.generate() to produce the summary. It runs very fast (even on CPU) and gives poor results.
Second I use the pipeline, passing the same model I built in the first step. This runs slower and gives very good results.
Third, I re-run the first model.generate(). Now the model runs slower and produces the same result as the pipeline.
I did a colab so that you can check.
Am I missing something about using the model directly vs. the pipeline?
https://colab.research.google.com/drive/15HOerw3mYVCsjedW_dVGeyRX5cYWiNvS | 03-31-2020 17:00:31 | 03-31-2020 17:00:31 | Hi @renatoviolin,
the T5 pipeline uses special input arguments for the `generate()` function that have been shown to work well for summarization. If you take a look at `task_specific_params` and under `summarization` in T5's config: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json. You can see the `generate()` arguments that are used for pipeline.
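A sketch of reusing those settings outside the pipeline, mirroring the comment above (assumes `model`/`tokenizer` are the T5 objects from the issue and `ARTICLE` is the input text; `prefix` is applied to the text while the remaining entries are `generate()` arguments):
```python
params = dict(model.config.task_specific_params["summarization"])
prefix = params.pop("prefix", "")  # e.g. "summarize: " is prepended to the input, not passed to generate()

input_ids = tokenizer.encode(prefix + ARTICLE, return_tensors="pt")
summary_ids = model.generate(input_ids, **params)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```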
<|||||>Hi @patrickvonplaten
Thanks for your attention. Now I see where I was going wrong.
But the strangest thing is that in the third step, after the pipeline was executed,
model.generate() produces results similar to the pipeline's (and takes just as long to run).
At first glance, given the poor results and how quickly it runs, it seemed to me that the weights were not loaded.<|||||>I think it's probably because beam search is deactivated, no length penalties, no repeat penalties and a very short max length is used |
transformers | 3,554 | closed | resize_token_embeddings error for Transformer-XL | # 🐛 Bug
## Information
Model I am using : Transformer-XL
Language I am using the model on : English
The problem arises when using:
* [ ] my own modified scripts: a fine-tuning script for TransfoXLLMHeadModel
## To reproduce
The following code aims to add two new tokens to the vocabulary, 'wug' and 'wugs'. After doing so to the tokenizer, we call `resize_token_embeddings` with the model in order to update its input embeddings to have correct dimension to account for the new tokens.
``` python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.add_tokens(['wug', 'wugs'])
model.resize_token_embeddings(len(tokenizer))
```
Running the above gives the following error
```
Traceback (most recent call last):
File "bug.py", line 9, in <module>
model.resize_token_embeddings(len(tokenizer))
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 198, in resize_token_embeddings
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 213, in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 234, in _get_resized_embeddings
old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AdaptiveEmbedding' object has no attribute 'weight'
```
It seems that the function `resize_token_embeddings()` does not currently account for the particulars of the input embeddings used for the TransformerXLLMHeadModel.
## Expected behavior
We expect that `resize_token_embeddings` should handle the appropriate updating of the embedding layers for the new vocabulary size, so that the model can be correctly used with the new tokens.
Thank you in advance | 03-31-2020 16:10:44 | 03-31-2020 16:10:44 | Hi @vsieplus ,
This is a known bug and sadly we don't have a solution for this now. TransfoXLLMHead uses adaptive weight embeddings which makes it not very easy to implement this function. Should be implemented in the long run though - I will note it down. @thomwolf @LysandreJik <|||||>@patrickvonplaten Does the same problem apply to XLNet?<|||||>No it should not. XLNet uses the standard `nn.embedding` - so it should be fine.<|||||>Hi, I faced the same issue and wrote some dirty code as a workaround in `modeling_utils.py`. The main idea is to just operate on the last embedding layer:
```
def _resize_token_embeddings(self, new_num_tokens):
old_embeddings = self.get_input_embeddings()
if type(self).__name__ == 'TransfoXLModel':
# since the 'TransfoXLModel' has multiple embedding layers, the last layer is resized
new_num_tokens_last = new_num_tokens
for emb_layer in old_embeddings.emb_layers[:-1]:
new_num_tokens_last -= emb_layer.weight.size(0)
new_embeddings_last = self._get_resized_embeddings(old_embeddings.emb_layers[-1], new_num_tokens_last)
new_embeddings = old_embeddings
new_embeddings.emb_layers[-1] = new_embeddings_last
else:
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
self.set_input_embeddings(new_embeddings)
return self.get_input_embeddings()
```
It workes for me (at least I get no error). Can someone confirm that this makes sense? Maybe @patrickvonplaten ?
<|||||>Sorry for bothering again @patrickvonplaten, but this is important for me: Can you or someone else comment on my "fix" above wether it makes sense?
Thanks in advance!<|||||>This looks okay to me, though I think you can patch a custom `_resize_token_embeddings(self, new_num_tokens)` to [`TransfoXLPreTrainedModel`](https://github.com/huggingface/transformers/blob/3e5928c57d57db3071638e6beaec9349a75b6a22/src/transformers/modeling_transfo_xl.py#L451) to avoid making the test (and leave the default behavior for other models).
Actually adding such a method to `TransfoXLPreTrainedModel` would solve this issue AFAICT. Since you wrote it @RafaelWO, you should make a PR with it :-)<|||||>Thanks for your feedback @sgugger ! I will move the logic into the `TransfoXLPreTrainedModel` and make my first PR :)<|||||>Out of curiosity, why do you go with
```
for emb_layer in old_embeddings.emb_layers[:-1]:
new_num_tokens_last -= emb_layer.weight.size(0)
```
Wouldn't just `emb_layer = old_embeddings.emb_layers[-1]` work out ? Also are `wug` and `wugs` often used ? If they're syntax tokens, which are frequent, you might want to add them to the corresponding embedding group.<|||||>I think the for loop is to make sure `new_num_tokens_last` is accurate by substracting the other embedding sizes.
I agree that ideally, the method written on `TransfoXLPreTrainedModel` should have an argument to decide to which embedding layer add the new tokens (with a default to the last one).<|||||>Yes that's correct @sgugger, thanks for answering.
I understand the idea of your introduced parameter, but for me the question is whether this makes sense? Because if you add the new token into e.g. the first layer, you would have to insert it also at the same position in your tokenizer and shift all tokens after that.
@TevenLeScao
> Also are wug and wugs often used ?
In my case I want to a `cls_token` which is not included in the pretrained tokenizer.<|||||>Ah my bad, misread the `:-1` into `-1:`. I've looked again at the `ProjectedAdaptiveLogSoftmax` and adding elsewhere should be fine if you update the `cutoffs` attribute to make sure it takes into account the changed embedding size.
Adding at the end is a good baseline; the only issue is that you're going to lose out on some of the benefits of the adaptive softmax as you're often going to have to access the bigger softmax layer whereas you usually want to have the frequent tokens (such as `cls`) on smaller ones.<|||||>> update the cutoffs attribute to make sure it takes into account the changed embedding size.
> Adding at the end is a good baseline; the only issue is that you're going to lose out on some of the benefits of the adaptive softmax as you're often going to have to access the bigger softmax layer whereas you usually want to have the frequent tokens (such as cls) on smaller ones.
Yes and yes, that's true.
But as I mentioned above: if you add such a common token into the first smaller layer and adjust the cutoffs (which would be the preferred way to do), you have a conflict with the tokenizer, because there the new token is at the end and not at position `20001` as in your model (default cutoffs `[20000, 40000, 200000]`).
Or am I missing something?<|||||>Yes, that is also going to be a problem, but it shouldn't be too hard to solve with a simple conversion function that shifts the other tokens. The cleanest way to do it would probably be to update the tokenizer yourself but I am not sure how easy that would be. <|||||>Thanks a lot @sgugger for answering here! As @sgugger mentioned, it'd be great if you can add a `_resize_token_embeddings()` function to `TransfoXLPreTrainedModel`.
The solution looks great to me @vsieplus :-)
You could make it a bit more compact, but that's a nitpick:
```python
embeddings = self.get_input_embeddings()
new_num_tokens_last = new_num_tokens - sum([emb.shape[0] for emb in embeddings.emb_layers[:-1]])
new_embeddings_last = self._get_resized_embeddings(embeddings.emb_layers[-1], new_num_tokens_last)
embeddings.emb_layers[-1] = new_embeddings_last
self.set_input_embeddings(embeddings)
``` |
transformers | 3,553 | closed | unable to completely load T5 pretrained model; missing/unexpected keys | # 🐛 Bug
## Information
Model I am using: T5
## To reproduce
```
model, info = T5ForConditionalGeneration.from_pretrained('t5-small',output_loading_info=True)
```
info is
`
{'missing_keys': ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'], 'unexpected_keys': ['encoder.block.0.layer.0.layer_norm.bias', 'encoder.block.0.layer.1.layer_norm.bias', 'encoder.block.1.layer.0.layer_norm.bias', 'encoder.block.1.layer.1.layer_norm.bias', 'encoder.block.2.layer.0.layer_norm.bias', 'encoder.block.2.layer.1.layer_norm.bias', 'encoder.block.3.layer.0.layer_norm.bias', 'encoder.block.3.layer.1.layer_norm.bias', 'encoder.block.4.layer.0.layer_norm.bias', 'encoder.block.4.layer.1.layer_norm.bias', 'encoder.block.5.layer.0.layer_norm.bias', 'encoder.block.5.layer.1.layer_norm.bias', 'encoder.final_layer_norm.bias', 'decoder.block.0.layer.0.layer_norm.bias', 'decoder.block.0.layer.1.layer_norm.bias', 'decoder.block.0.layer.2.layer_norm.bias', 'decoder.block.1.layer.0.layer_norm.bias', 'decoder.block.1.layer.1.layer_norm.bias', 'decoder.block.1.layer.2.layer_norm.bias', 'decoder.block.2.layer.0.layer_norm.bias', 'decoder.block.2.layer.1.layer_norm.bias', 'decoder.block.2.layer.2.layer_norm.bias', 'decoder.block.3.layer.0.layer_norm.bias', 'decoder.block.3.layer.1.layer_norm.bias', 'decoder.block.3.layer.2.layer_norm.bias', 'decoder.block.4.layer.0.layer_norm.bias', 'decoder.block.4.layer.1.layer_norm.bias', 'decoder.block.4.layer.2.layer_norm.bias', 'decoder.block.5.layer.0.layer_norm.bias', 'decoder.block.5.layer.1.layer_norm.bias', 'decoder.block.5.layer.2.layer_norm.bias', 'decoder.final_layer_norm.bias'], 'error_msgs': []}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
No keys should be missing or unexpected
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0 (yes)
- Tensorflow version (GPU?): nope
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: nope
| 03-31-2020 15:03:28 | 03-31-2020 15:03:28 | Hi @dhecloud,
Thanks for you issue :-)
Does the model still work fine? <|||||>> Hi @dhecloud,
>
> Thanks for you issue :-)
> Does the model still work fine?
Hi, thanks for your reply.
Using the examples provided in the doc, the model works fine.
Before, I used `T5WithLMHeadModel` in version `2.5.1`, which did not raise this missing-keys warning. After I moved to `T5ForConditionalGeneration` in `2.7.0` there was this warning and my training loss diverged, so I thought I might raise this issue in case there was some sort of change in naming in the checkpoint.<|||||>I'm gonna take a look :-) <|||||>Hi guys,
Any news on this?
When I try to load t5-base I receive this:
INFO:transformers.modeling_utils:Weights of T5ForConditionalGeneration not initialized from pretrained model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']<|||||>> Hi guys,
> Any news on this?
> When I try to load t5-base I receive this:
>
> INFO:transformers.modeling_utils:Weights of T5ForConditionalGeneration not initialized from pretrained model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
I think it's mostly a harmless, misplaced warning. The model should still work fine. You can test it by trying out the examples.<|||||>Yeah, this should not be a problem: all of these weights (`['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']`) are tied to the input embedding matrix and therefore don't need to be initialized.<|||||>How can we silence the error?<|||||>It should be enough to lower the verbosity of the transformers CLI logger. |
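A minimal way to do that with the standard `logging` module (the logger name is the one shown in the quoted INFO line above):

```python
import logging

# hide the INFO-level tied-weights message; WARNING and above still get through
logging.getLogger("transformers.modeling_utils").setLevel(logging.WARNING)
```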
transformers | 3,552 | closed | Update README.md | - Show that the last uploaded version was trained on more data (custom_license files) | 03-31-2020 14:15:49 | 03-31-2020 14:15:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=h1) Report
> Merging [#3552](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/83d1fbcff608f84a27234e20d6531b4404dc059e&el=desc) will **decrease** coverage by `0.49%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3552 +/- ##
==========================================
- Coverage 78.31% 77.81% -0.50%
==========================================
Files 100 100
Lines 17064 17064
==========================================
- Hits 13363 13278 -85
- Misses 3701 3786 +85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3552/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=footer). Last update [83d1fbc...cd4f658](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,551 | closed | Recommended preprocessing steps for english sentences in GPT2 | # ❓ Questions & Help
To run inference on English sentences, I'm not really sure what preprocessing steps I need to do before sending the tokenized text to GPT-2 in order to get predictions. Any advice?
How should I handle non-English words that can appear in English sentences? And extra punctuation, spaces, new lines, user mentions, hashtags, URLs, alternative apostrophes, etc.?
## Details
<!-- Description of your issue -->
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60807799/gpt2-huggingfaces-transformer-preprocessing | 03-31-2020 14:05:32 | 03-31-2020 14:05:32 | Just pre-process them as you would for any other NLP task. This may include: normalisation of punctuation, dealing with HTML/XML, white-space normalisation, language verification. Something that people used to do in the age of word2vec is creating new entities for specific items. For instance, any number could be replaced with all 1s (1234 becomes 1111) to have relatively small vocabulary that still takes the number of characters into account. Same with users (e.g @user@) and mentions and urls (@URL@), and so on. This might still be worth the effort, but not necessarily. In such cases you may want to ensure that these tokens are not split in the tokenizer.
I hope that you understand that this is a very general question not related to this repository, so I am closing this.<|||||>@BramVanroy I think I forgot to mention that I'm using the gpt-2 training models from this repo. I'm not re-training gpt-2. I think I shouldn't create new entities for specific terms if that process has not happened during training step. Am I right? <|||||>If you are not planning to pretraining the model (and generating a new vocab) then, indeed, you should not try to add new tokens. So in your case you would just need to do some basic normalisation of punctuation, HTML, etc.<|||||>@BramVanroy Thanks for the answer. I'm directly using gpt-2 pretrained models from https://huggingface.co/models . Specifically the ones that are created by Huggingface. I can't find the preprocessing code that was used when those models were trained... so I can replicate the same at inference time. I'm wondering if I should just assume things or it'd be safer to see how it was trained so I can make sure to prepare the sentences in a similar way.<|||||>Has there been any preprocessing during training phase @BramVanroy ? <|||||>I don't know. Note that HuggingFace didn't train GPT2. They ported the weights to their own architecture. You can try to get into contact with the people that created it, OpenAI. https://github.com/openai/gpt-2<|||||>@Damiox have you found the original preprocessing code?<|||||>@don-prog no, I have not 😢 - I am doing some subset of the preprocessing heuristics that @BramVanroy detailed before when serving the model. But I still think it'd be really good to have a consistency between both preprocessing mechanisms: training (whatever it was) vs inference. I just haven't had the time to identify that original preprocessing code from OpenAI |
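For reference, a rough sketch of the normalisation heuristics discussed earlier in this thread. The regexes and the `@user@` / `@URL@` placeholders are illustrative choices, not something GPT-2 was trained with (so they would simply be split into subwords by the tokenizer):

```python
import re

def normalise(text: str) -> str:
    # user mentions first, then urls, so the @URL@ placeholder is not re-matched as a mention
    text = re.sub(r"@\w+", "@user@", text)          # user mentions
    text = re.sub(r"https?://\S+", "@URL@", text)   # urls
    text = re.sub(r"\d", "1", text)                 # 1234 -> 1111
    text = re.sub(r"[’`´]", "'", text)              # alternative apostrophes
    text = re.sub(r"\s+", " ", text)                # new lines / extra whitespace
    return text.strip()
```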
transformers | 3,550 | closed | [T5, Tests] Add extensive hard-coded integration tests and make sure PT and TF give equal results | A direct comparison to Google's official model seems quite hard - not sure if that's absolutely needed @thomwolf, @craffel
But some integration tests for T5 would be very nice, to be sure that changes in the future will not break T5.
This PR adds hard-coded integration tests, where the input for summarization is copied from Bart's summarization tests and the input for translation is taken from Appendix D of the official [paper](https://arxiv.org/pdf/1910.10683.pdf)
Checking the expected output for PT, one can see that the output looks quite good!
- [x] Add PyTorch integration tests
- [x] Verify quality (subjectively for the moment)
- [x] Add TF integration tests
- [x] Same output for PT and TF
UPDATE:
- Found a big bug in TF Beam Search generation (see comment) -> this PR fixes it | 03-31-2020 10:10:25 | 03-31-2020 10:10:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=h1) Report
> Merging [#3550](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6f5a12a5833d1e3783e4b8a42cb556b64085745e&el=desc) will **decrease** coverage by `0.49%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3550 +/- ##
==========================================
- Coverage 78.30% 77.80% -0.50%
==========================================
Files 100 100
Lines 17062 17062
==========================================
- Hits 13360 13275 -85
- Misses 3702 3787 +85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=footer). Last update [6f5a12a...c3ce2fe](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,549 | closed | model name '../data/bert_models/chinese_finetuned_lm/pytorch_model.bin' was not found in model name list . Creating an empty model card. | nlp = pipeline('fill-mask',
# model=args.bert_model_path,
# config=args.bert_config_path,
# tokenizer=args.bert_model_dir
model = '../data/bert_models/chinese_finetuned_lm/pytorch_model.bin',
config = '../data/bert_models/chinese_finetuned_lm/config.json',
tokenizer = "../data/bert_models/chinese_finetuned_lm/"
)
I am using a fine-tuned model (Chinese BERT); when fine-tuning finishes, I cannot load the model. | 03-31-2020 09:23:29 | 03-31-2020 09:23:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
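One thing to check: `pipeline` / `from_pretrained` expect the path of the *directory* containing `config.json` and `pytorch_model.bin`, not the path of the weight file itself. A sketch using the paths from the snippet above:

```python
from transformers import pipeline

nlp = pipeline(
    "fill-mask",
    model="../data/bert_models/chinese_finetuned_lm/",       # directory, not pytorch_model.bin
    tokenizer="../data/bert_models/chinese_finetuned_lm/",
)
```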
|
transformers | 3,548 | closed | How to extract "contiguous tokens" from `NerPipeline` results? | Using `NerPipeline`, I want to be able to input a string (sequence of tokens), and extract *entity groups*, where an entity group is a contiguous series of tokens, having the same *entity type*.
**Example:**
For the ner code below:
```
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
# Allocate a pipeline for named entity recognition
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
nlp = pipeline('ner', model=model, tokenizer=tokenizer)
nlp('Enzo works at the Australian National University (AUN)')
```
This returns:
```
[{'entity': 'I-PER', 'score': 0.9983270168304443, 'word': 'En'},
{'entity': 'I-PER', 'score': 0.9952995777130127, 'word': '##zo'},
{'entity': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian'},
{'entity': 'I-ORG', 'score': 0.9967807531356812, 'word': 'National'},
{'entity': 'I-ORG', 'score': 0.9959043264389038, 'word': 'University'},
{'entity': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AU'},
{'entity': 'I-ORG', 'score': 0.9763911366462708, 'word': '##N'}]
```
When I want it to return something like:
```
[{'entity_group': 'I-PER', 'score': 0.9983270168304443, 'word': 'Enzo'},
{'entity_group': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian National University'},
{'entity_group': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AUN'}]
```
I should be able to write a function that performs the above transformation if the indices of the word pieces are also indicated in the dictionary output of `NerPipeline`. Is this currently possible? Please advise me on if there's already an easy way to do this.
If not, I can fork the repo and introduce an `index` key to the dictionary output. I can send a PR for this if the use case is general enough. | 03-31-2020 09:15:19 | 03-31-2020 09:15:19 | What do you think @mfuntowicz?<|||||>[Related issue ](https://github.com/huggingface/transformers/issues/2488)<|||||>Actually, I realized that instead of `index`, even better would be the word piece `offsets`, similar to that returned by `tokenizer.encode` from the huggingface [tokenizers package](https://github.com/huggingface/tokenizers).
Will get to implementing this within the week if this isn't supported yet!<|||||>This recently merged [PR](https://github.com/huggingface/transformers/pull/3957) should solve this issue 🙂 |
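For reference, a sketch of the grouping step described in this issue. It assumes each pipeline output also carries the proposed `index` key (the position of the word piece in the tokenized input), so that only truly contiguous pieces get merged:

```python
def group_entities(ner_results):
    groups = []
    for tok in ner_results:
        last = groups[-1] if groups else None
        # merge only if the entity type matches and the word pieces are adjacent
        if last and last["entity"] == tok["entity"] and tok["index"] == last["index"] + 1:
            piece = tok["word"]
            last["word"] += piece[2:] if piece.startswith("##") else " " + piece
            last["index"] = tok["index"]
            # keep the first piece's score, as in the example output above
        else:
            groups.append(dict(tok))
    return [{"entity_group": g["entity"], "score": g["score"], "word": g["word"]} for g in groups]
```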
transformers | 3,547 | closed | [T5, TF 2.2] change tf t5 argument naming | **Problem**:
As shown in #3539, in TF 2.2 errors occur due to the naming of the first argument in the `keras.layer.__call__` function of TF T5.
Previously `decoder_input_ids` was used as the first argument - which did not produce any errors in TF <= 2.1. In TF 2.2, it produces the error:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
797 else:
798 raise ValueError(
--> 799 'The first argument to `Layer.call` must always be passed.')
800
801 call_context = base_layer_utils.call_context()
ValueError: The first argument to `Layer.call` must always be passed.
```
**Conclusion**
It seems that we have to change to a consistent naming, being `inputs` for the first argument of every `keras.layer.__call__` function | 03-31-2020 09:03:22 | 03-31-2020 09:03:22 | |
transformers | 3,546 | closed | Impossible to use T5 11b | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
T5 11B param
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
summarizer = pipeline(task="summarization", model="t5-11b", tokenizer="t5-11b")
summary = summarizer(
article,
min_length=5,
max_length=100
)
print("The Summary, ",summary[0]['summary_text'])
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
As indicated in [T5's repo](https://github.com/google-research/text-to-text-transfer-transformer), they used Mesh Tensorflow, which is (according to them) the only way to run inference with T5. This means the default CPU setting, and even a single-GPU setting, would result in the following:
1. CPU setting: run forever (my experience)
2. GPU setting: OOM (although I haven't tested this).
Meaning the current implementations of 3b and 11b would most likely render useless.
More testing needs to be done on whether this is the same for smaller models.
Get an output
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: GCP
- Python version: 3.7.7
- PyTorch version (GPU?): 1.1.0 and no GPU
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: no
| 03-31-2020 08:33:03 | 03-31-2020 08:33:03 | Update
On a regular CPU machine (not using GPUs), here's the benchmarks you'll need to load these models into memory and run them:
Bart-large-cnn (default): 5GB of RAM
T5-small: 14GB
T5-base: 20GB
T5-large: 31GB
T5-3b: 68GB
T5-11b: 120GB
So, I was initially wrong: **You can run this on CPU, but you'll need a lot of RAM, or try out GPUs** |
transformers | 3,545 | closed | [T5, pipeline] fix bug in warnings | Warnings only took `self.model.config.max_length` into consideration, but not the `max_length` parameter that was actually passed.
This PR fixes this. | 03-31-2020 07:48:13 | 03-31-2020 07:48:13 | |
transformers | 3,544 | closed | [examples] unit test for run_bart_sum | - add lightning to `examples/requirements.txt`
- first lightning unittest! | 03-31-2020 05:10:06 | 03-31-2020 05:10:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=h1) Report
> Merging [#3544](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5c393dcebf42eaec9c1e1d619b5a7788a2d7c65&el=desc) will **increase** coverage by `0.97%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3544 +/- ##
==========================================
+ Coverage 76.84% 77.81% +0.97%
==========================================
Files 100 100
Lines 17064 17064
==========================================
+ Hits 13112 13279 +167
+ Misses 3952 3785 -167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=footer). Last update [e5c393d...af9dd75](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM from a superficial glance<|||||>Planning on merging this April 15 at 7pm EST barring objections. |
transformers | 3,543 | closed | [testing] add timeout_decorator | This is a simple way to make sure code doesn't get slower over time. Since it is a new dependency, I wanted to show a tiny PR before I use it more. | 03-31-2020 04:12:09 | 03-31-2020 04:12:09 | Feels like it could be reimplemented in a few lines of code – do we need to add a new dependency for this?<|||||>I copy pasted it. Would love to understand more about the adding a dependency vs maintaining code tradeoff!<|||||>I think that's a great addition! Trying to keep the tests short is very important I think :-) <|||||>This is too long to copy/paste, so I'd see two options:
- add it as a dependency to extras["testing"]
- take just the signals based implem, clean it up/distill it down to a few (10) lines of code and add it in `tests/`<|||||>I think we should do `extras['testing']` (that was my first attempt on this PR).
If we delete the `signals=False` logic, I don't think circleci will work in distributed testing mode.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=h1) Report
> Merging [#3543](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3543 +/- ##
==========================================
+ Coverage 78.84% 78.85% +0.01%
==========================================
Files 114 114
Lines 18691 18691
==========================================
+ Hits 14737 14739 +2
+ Misses 3954 3952 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=footer). Last update [b868617...8b22919](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
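For reference, a distilled, signal-based sketch of the second option discussed above (Unix-only, main-thread-only; illustrative, not the `timeout_decorator` package itself):

```python
import functools
import signal

def timeout(seconds):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            def handler(signum, frame):
                raise TimeoutError(f"{fn.__name__} did not finish within {seconds}s")
            old_handler = signal.signal(signal.SIGALRM, handler)
            signal.alarm(seconds)  # start the countdown
            try:
                return fn(*args, **kwargs)
            finally:
                signal.alarm(0)  # cancel the alarm and restore the previous handler
                signal.signal(signal.SIGALRM, old_handler)
        return wrapper
    return decorator
```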
|
transformers | 3,542 | closed | KeyError: 'answers' error when using BioASQ dataset using Huggingface Transformers | # 🐛 Bug
I am using Bert on BioASQ question answering dataset using the script run_squad.py from Huggingface Transformers.
##To reproduce
Steps to reproduce the behavior:
1.I am using run_squad.py https://github.com/huggingface/transformers/blob/master/examples/run_squad.py from Huggingface Transformers for fine-tuning on BioASQ Question Answering dataset.
2.I have converted the tensorflow weights provided by the authors of BioBERT https://github.com/dmis-lab/bioasq-biobert to Pytorch as discussed here https://github.com/huggingface/transformers/issues/312.
3.Further, I am using the preprocessed data of BioASQ [(https://github.com/dmis-lab/bioasq-biobert)] which is converted to the SQuAD form. However, when I am running the run_squad.py script with the below parameters
python3 run_squad.py \
--model_type bert \
--model_name_or_path /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/BioBERTv1.1-SQuADv1.1-Factoid-PyTorch/ \
--do_train \
--do_eval \
--save_steps 1000 \
--train_file $data/BioASQ-train-factoid-6b.json \
--predict_file $data/BioASQ-test-factoid-6b-1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/QA_output_squad/BioASQ-factoid-6b/BioASQ-factoid-6b-1-issue-23mar/
I get the below error:
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_squad.py", line 856, in <module>
main()
File "run_squad.py", line 845, in main
result = evaluate(args, model, tokenizer, prefix=global_step)
File "run_squad.py", line 299, in evaluate
dataset, examples, features = load_and_cache_examples(args, tokenizer, evaluate=True, output_examples=True)
File "run_squad.py", line 475, in load_and_cache_examples
examples = processor.get_dev_examples(args.data_dir, filename=args.predict_file)
File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 522, in get_dev_examples
return self._create_examples(input_data, "dev")
File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 549, in _create_examples
answers = qa["answers"]
KeyError: 'answers'
- `transformers` version: Latest
- Platform:
- Python version: python3.7.4
- PyTorch version (GPU?): pytorch 1.4.
- Tensorflow version (GPU?):tensorflow/2.0.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:No
Appreciate your help.
Thanks!!
| 03-31-2020 03:33:33 | 03-31-2020 03:33:33 | Have you checked whether the Bioasq format suits the Huggingface interface/format?
Because BioASQ does not natively support a reading comprehension task as defined in SQuAD.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
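A possible workaround, assuming the files otherwise follow the SQuAD v1 layout, is to give every dev/test question an empty `answers` list before feeding the file to `run_squad.py` (the file name below is the one from the command above):

```python
import json

with open("BioASQ-test-factoid-6b-1.json") as f:
    data = json.load(f)

for article in data["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            qa.setdefault("answers", [])   # the SQuAD processor expects this key

with open("BioASQ-test-factoid-6b-1-patched.json", "w") as f:
    json.dump(data, f)
```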
|
transformers | 3,541 | closed | forward() got an unexpected keyword argument 'output_all_encoded_layers' | I am getting an error such as forward () got an unexpected keyword argument 'output_all_encoded_layers', how can I fix it?
```python
class BertBinaryClassifier(nn.Module):
    def __init__(self, dropout=0.1):
        super(BertBinaryClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(768, 1)
        self.sigmoid = nn.Sigmoid()
    def forward(self, tokens, masks=None):
        _, pooled_output = self.bert(tokens, attention_mask=masks, output_all_encoded_layers=False)
        dropout_output = self.dropout(pooled_output)
        linear_output = self.linear(dropout_output)
        proba = self.sigmoid(linear_output)
        return proba

bert_clf = BertBinaryClassifier()
bert_clf = bert_clf.cuda()
x = torch.tensor(train_tokens_ids[:3]).to(device)
y, pooled = bert_clf.bert(x, output_all_encoded_layers=False)
x.shape, y.shape, pooled.shape
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-def915600e8d> in <module>()
1 x = torch.tensor(train_tokens_ids[:3]).to(device)
----> 2 y, pooled = bert_clf.bert(x, output_all_encoded_layers=False)
3 x.shape, y.shape, pooled.shape
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers' | 03-30-2020 21:31:17 | 03-30-2020 21:31:17 | This should be added to the initializer, not the forward method. Also, the correct parameter is `output_hidden_states`. You can also be more explicit by changing the config:
```python
model_config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
self.bert = AutoModel.from_pretrained('bert-base-uncased', config=model_config)
```<|||||>> This should be added to the initializer, not the forward method. Also, the correct parameter is `output_hidden_states`. You can also be more explicit by changing the config:
>
> ```python
> model_config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
> self.bert = AutoModel.from_pretrained('bert-base-uncased', config=model_config)
> ```
Thank you very much for answering.
Sorry, ...can you please write the entire code?<|||||>In your code, replace
```python
self.bert = BertModel.from_pretrained('bert-base-uncased')
```
with
```python
model_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
self.bert = BertModel.from_pretrained('bert-base-uncased', config=model_config)
```
and don't forget to import BertConfig at the top.
In the future, please format your code correctly. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks |
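For completeness, a sketch of the adjusted `forward()` under the current API: drop the old `output_all_encoded_layers` argument and index into the returned tuple instead:

```python
def forward(self, tokens, masks=None):
    outputs = self.bert(tokens, attention_mask=masks)
    pooled_output = outputs[1]            # (sequence_output, pooled_output, ...)
    dropout_output = self.dropout(pooled_output)
    linear_output = self.linear(dropout_output)
    return self.sigmoid(linear_output)
```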
transformers | 3,540 | closed | Quick Tour TF2.0 error: dataclasses.FrozenInstanceError: cannot assign to field 'label' | # 🐛 Bug
## Information
Model I am using: bert-base-cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. install TensorFlow 2 with conda `conda install tensorflow`
2. install Transformers either from source or using pip `pip install transformers`
3. run the Quick Tour TF 2 example with the following content:
```python
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
```
## Expected behavior
```
Traceback (most recent call last):
File "quick_tour_tf2.py", line 11, in <module>
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\glue.py", line 86, in glue_convert_examples_to_features
example = processor.tfds_map(example)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\utils.py", line 115, in tfds_map
example.label = self.get_labels()[int(example.label)]
File "<string>", line 4, in __setattr__
dataclasses.FrozenInstanceError: cannot assign to field 'label'
```
### Update:
I have recently installed Pytorch and tried out `examples/run_tf_glue.py` and the same error occured.
```
(transformers) C:\Users\Anh Minh\Workspace\transformers_my_codes>python run_tf_glue.py
2020-03-31 10:43:55.555102: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2
2020-03-31 10:44:02.576281: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-03-31 10:44:02.669572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2080 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-03-31 10:44:02.679337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-03-31 10:44:02.683708: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-03-31 10:44:02.689044: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-03-31 10:44:02.693280: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-03-31 10:44:02.762552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-03-31 10:44:02.767982: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-03-31 10:44:02.773095: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-31 10:44:02.779069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-03-31 10:44:02.789070: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-03-31 10:44:02.799782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2080 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-03-31 10:44:02.809292: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-03-31 10:44:02.813862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-03-31 10:44:02.818889: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-03-31 10:44:02.823516: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-03-31 10:44:02.828140: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-03-31 10:44:02.833958: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-03-31 10:44:02.839710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-31 10:44:02.845469: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-03-31 10:44:05.483986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-31 10:44:05.489238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-03-31 10:44:05.492138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-03-31 10:44:05.499953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6269 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-03-31 10:44:06.412558: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
INFO:absl:Overwrite dataset info from restored data version.
INFO:absl:Reusing dataset glue (C:\Users\Anh Minh\tensorflow_datasets\glue\mrpc\1.0.0)
INFO:absl:Constructing tf.data.Dataset for split None, from C:\Users\Anh Minh\tensorflow_datasets\glue\mrpc\1.0.0
Traceback (most recent call last):
File "run_tf_glue.py", line 51, in <module>
train_dataset = glue_convert_examples_to_features(data["train"], tokenizer, 128, TASK)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\glue.py", line 86, in glue_convert_examples_to_features
example = processor.tfds_map(example)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\utils.py", line 115, in tfds_map
example.label = self.get_labels()[int(example.label)]
File "<string>", line 4, in __setattr__
dataclasses.FrozenInstanceError: cannot assign to field 'label'
```
The issue has been resolved by reinstalling Transformers 2.5.0
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Windows 10
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 on GPU
- Tensorflow version (GPU?): 2.1 on GPU
- Using GPU in script?: yes, RTX 2080
- Using distributed or parallel set-up in script?: Unavailable | 03-30-2020 21:00:51 | 03-30-2020 21:00:51 | This was fixed yesterday, can you try installing from master?<|||||>verified that it's fixed in master. However, the bug remains when installing from pip. <|||||>We'll ship a new pip release soon, but in any case we'll try to update the code so that the TF script can run with an immutable `InputExample` (as discussed w/ @jplu)<|||||>I have just had the same problem. Can you show me how to fix it specifically?<|||||>[Install from source](https://github.com/huggingface/transformers#from-source) |
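For anyone landing here, one way to install from source is:

```bash
pip install git+https://github.com/huggingface/transformers.git
```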
transformers | 3,539 | closed | T5 Summarization | # 🐛 Bug
T5 summarization code in pipelines.py file gives an error.
## Information
Model I am using: T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
pipelines.py official documentation example at line 1146
```
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=20)
```
## To reproduce
In google colab, I did the following:
```
!pip install transformers
from transformers import pipeline
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=10)
```
And this gets the following error:
```
Your max_length is set to 200, but you input_length is only 18. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-0fc603a01733> in <module>()
----> 1 summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=10)
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
797 else:
798 raise ValueError(
--> 799 'The first argument to `Layer.call` must always be passed.')
800
801 call_context = base_layer_utils.call_context()
ValueError: The first argument to `Layer.call` must always be passed.
```
- `transformers` version: 2.7.0
- Platform: Google Colab
- PyTorch version (GPU?): 1.4.0 GPU enabled
- Tensorflow version (GPU?): 2.2.0-rcl GPU enabled
- Using GPU in script?:
- Using distributed or parallel set-up in script?: No
| 03-30-2020 20:35:43 | 03-30-2020 20:35:43 | Hi @cformosa,
Thanks for posting this bug. This seems to be related to the new TF 2.2 release.
Could you instead use TF2.1 for the moment:
```
!pip install transformers
!pip install tensorflow==2.1
from transformers import pipeline
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=10
```
Please let me know if you still encounter problems.
|
transformers | 3,538 | closed | [Docs] Add usage examples for translation and summarization | Adds docs. Fastest way to check is the changes in "rich diff" format. | 03-30-2020 20:33:02 | 03-30-2020 20:33:02 | |
transformers | 3,537 | closed | Add model cards | Add IMDB tuned classifier and LMs model cards. | 03-30-2020 19:44:09 | 03-30-2020 19:44:09 | |
transformers | 3,536 | closed | [Encoder-Decoder] Force models outputs to always have batch_size as their first dim | This PR remove the hard-coded variable `encoder_outputs_batch_dim_idx` from Bart and T5 by transposing BART's `encoder_outputs` dimensions before returning them.
**Reasons:**
- When adding more encoder-decoder models, we would always force the newly added model to have this variable
- When adding the modeling_encoder_decoder.py file, models that could be used in an encoder-decoder structure would also need to have this attribute, e.g. we would have to add it to Bert for example
- `encoder_outputs_batch_dim_idx` is a hard-coded variable that I don't think is very pretty
**Trade-off:**
- Now every encoder output in a encoder-decoder model has to have `batch_size` as their first dimension.
This PR is related to a question that already came up before (see #3120):
*Should we force all model outputs to have `batch_size` as their first dimension* ?
I think it would be good to always have `batch_size` as the first dimension (to the user) exposed output @thomwolf @LysandreJik @sshleifer @julien-c | 03-30-2020 19:28:22 | 03-30-2020 19:28:22 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=h1) Report
> Merging [#3536](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6f5a12a5833d1e3783e4b8a42cb556b64085745e&el=desc) will **decrease** coverage by `0.49%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3536 +/- ##
==========================================
- Coverage 78.30% 77.80% -0.50%
==========================================
Files 100 100
Lines 17062 17062
==========================================
- Hits 13360 13275 -85
- Misses 3702 3787 +85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.48% <ø> (-0.05%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.80% <100.00%> (-0.02%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=footer). Last update [6f5a12a...89d0945](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,535 | closed | Error on fine-tuning XLM like model on SQUaD like dataset | # 🐛 Bug
## Information
I am trying to fine-tune the model [xlm-mlm-100-1280](https://huggingface.co/xlm-mlm-100-1280) on SQuAD v1 dataset like (tydiQA) with the script provided for this task (transformers/examples/run_squad.py) and I get the following error:
```python
Epoch: 0% 0/5 [00:00<?, ?it/s]
Iteration: 0% 0/2383 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/transformers/examples/run_squad.py", line 829, in <module>
main()
File "/content/transformers/examples/run_squad.py", line 768, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "/content/transformers/examples/run_squad.py", line 204, in train
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cls_index'
Epoch: 0% 0/5 [00:00<?, ?it/s]
Iteration: 0% 0/2383 [00:00<?, ?it/s]
```
## To reproduce
```bash
git clone https://github.com/huggingface/transformers
pip install -q ./transformers
# wget "your SQuAD v1 like dataset
python /content/transformers/examples/run_squad.py \
--model_type xlm \
--model_name_or_path xlm-mlm-100-1280 \
--do_lower \
--do_train \
--do_eval \
--train_file /content/dataset/tydiqa-goldp-v1.0-train.json \
--predict_file /content/dataset/tydiqa-goldp-v1.0-dev.json \
--per_gpu_train_batch_size 24 \
--per_gpu_eval_batch_size 128 \
--learning_rate 3e-5 \
--num_train_epochs 5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--overwrite_output_dir \
--save_steps 2000 \
--threads 400
```
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.7.0
- Platform: Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0-rc1 (True)
- Using GPU in script?: Yes. Nvidia Tesla P100
- Using distributed or parallel set-up in script?: No
| 03-30-2020 19:26:04 | 03-30-2020 19:26:04 | Hi, any follow-up in this thread? I have received the same error with yours, but a different parameter 'model_name_or_path = xlm-mlm-tlm-xnli15-1024' is used in my experiment.<|||||>I had to revert all the way to 2.5.1 to get this to work (xlnet-base fine-tuning on SQuAD 1.1), FWIW, so it's been broken for a bit...<|||||>Thanks @nelson-liu <|||||>Cc @julien-c <|||||>> I had to revert all the way to 2.5.1
Thanks @nelson-liu
using `run_squad.py` at `huggingface/transformers/v2.5.1/examples`
https://raw.githubusercontent.com/huggingface/transformers/v2.5.1/examples/run_squad.py<|||||>I encountered a similar error fine-tuning a RoBERTa model on a SWAG-like dataset using the example scripts. The problem appears to be that the transformers.Trainer object defined [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) unpacks all the properties of the InputFeatures as arguments to the model's `forward`, like so:
```
for k, v in inputs.items():
inputs[k] = v.to(self.args.device)
...
outputs = model(**inputs)
```
The problem is that the InputFeatures have properties like example_id that are not valid keyword args for `forward`. (Same problem for this ticket: SquadFeatures has cls_index).
As a workaround, I'm removing the example_id property from InputFeatures. Long-term, maybe the Trainer should be more selective about which arguments it passes?
<|||||>`run_squad.py` doesn't currently use the Trainer so this is probably a distinct issue, @steeter-cyclist .<|||||>Same query as @andyweizhao. Any updates? Reverting to v2.5.1 throws ImportError: cannot import name 'MODEL_FOR_QUESTION_ANSWERING_MAPPING'<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Still occurs
```
Iteration: 0%| | 0/44511 [00:00<?, ?it/s]
Epoch: 0%| | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_squad.py", line 820, in <module>
main()
File "run_squad.py", line 763, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_squad.py", line 202, in train
outputs = model(**inputs)
File "C:\Users\erann\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 72
2, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cls_index'
```
https://github.com/huggingface/transformers/issues/6360 shows stale-bot already kicked into action in there as well |
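Until this is properly fixed, a quick local workaround (not an official fix) is to drop the extra keys in `run_squad.py` right before the forward call, since the QA head being loaded here does not accept them:

```python
# just before `outputs = model(**inputs)` in train() / evaluate()
for key in ("cls_index", "p_mask"):
    inputs.pop(key, None)
```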
transformers | 3,534 | closed | pretrained EsperBERTo | Hi,
I am trying to replicate the pretraining process mentioned in this blog post: https://huggingface.co/blog/how-to-train
I have time-restricted access to the GPU I'm currently working on and so I wanted to know how to save the checkpoint and resume the pretraining process from the latest checkpoint.
Thanks! | 03-30-2020 15:15:24 | 03-30-2020 15:15:24 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
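For reference: `run_language_modeling.py` writes `checkpoint-<step>` folders into `--output_dir` every `--save_steps` steps, and resuming roughly amounts to pointing `--model_name_or_path` at the latest checkpoint. A rough sketch (paths, step number and tokenizer location are placeholders):

```bash
python run_language_modeling.py \
    --model_type roberta \
    --model_name_or_path ./EsperBERTo-small/checkpoint-10000 \
    --tokenizer_name ./EsperBERTo-small \
    --output_dir ./EsperBERTo-small \
    --overwrite_output_dir \
    --do_train --mlm \
    --train_data_file ./oscar.eo.txt \
    --save_steps 10000
```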
|
transformers | 3,533 | closed | Error when training with distributed training on 4/8 Nvidia v100. | # 🐛 Bug
## Information
I'm getting the following error while training with the official [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) script on the [WikiText-2 dataset](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) in a multi-GPU setting (4/8 Nvidia GPUs).
```python
Traceback (most recent call last):
File "run_language_modeling.py", line 976, in <module>
main()
File "run_language_modeling.py", line 926, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 513, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 457, in forward
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:518)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f66375cb273 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10d::Reducer::prepare_for_backward(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> > const&) + 0x734 (0x7f66822b09e4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #2: <unknown function> + 0x691a4c (0x7f668229fa4c in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #3: <unknown function> + 0x1d3ef4 (0x7f6681de1ef4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #4: _PyCFunction_FastCallDict + 0x35c (0x5674fc in /usr/bin/python)
frame #5: /usr/bin/python() [0x50abb3]
frame #6: _PyEval_EvalFrameDefault + 0x449 (0x50c5b9 in /usr/bin/python)
frame #7: /usr/bin/python() [0x508245]
```
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: [WikiText-2 dataset](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) Using this command:
```
python -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py --output_dir=./output/ --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=./data/wiki.train.raw --per_gpu_train_batch_size 2 --num_train_epochs 10 --fp16
```
## Expected behavior
Should run distributed training without any errors.
## Environment info
I'm using this docker:
```
docker pull deepspeed/deepspeed:latest
```
- `transformers` version: 2.6.0
- Platform: Linux
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: Yes
| 03-30-2020 14:31:53 | 03-30-2020 14:31:53 | Out of curiosity, does the same issue arise without fp16?<|||||>@BramVanroy Yes.<|||||>Someone's help, please?<|||||>Have look here: https://github.com/pytorch/pytorch/issues/22436<|||||>@BramVanroy I'm sorry, the repo is working fine with distributed training. I found the error comes from adding special tokens:
```python
SPECIAL_TOKENS_DICT = {'additional_special_tokens': ['token1', 'token2']}
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
```
It's weird because I didn't get any error with only 1 GPU.
I solved it by doing:
```python
model.resize_token_embeddings(len(tokenizer))
```
@BramVanroy Can you please confirm it's the right way?<|||||>Yes, after modifying the vocabulary of the tokenizer, you also need to propagate those changes to the model's embeddings.
If your problem is fixed, please close this topic.<|||||>Great, thanks! |
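To summarise the fix in this thread: grow the tokenizer's vocabulary and resize the model's embeddings together, before the model is wrapped in `DistributedDataParallel` or handed to the optimizer:

```python
SPECIAL_TOKENS_DICT = {"additional_special_tokens": ["token1", "token2"]}
tokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)
model.resize_token_embeddings(len(tokenizer))  # keep the embedding matrix in sync with the vocab
```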
transformers | 3,532 | closed | Resizing embedding matrix before sending it to the optimizer. | Hi there,
This is a minor bug when fine-tuning a pre-trained language model with a larger vocabulary using the run_lm_finetuning.py script.
Nothing major but I spent a couple of hours trying to figure out why my token embeddings were not being updated with their corresponding gradient :)
This bug will rise when you add new token types in the Tokenizer.
Since the resizing was done after passing the params to the optimizer, the wrong set of params for the embedding table were optimized.
Cheers | 03-30-2020 14:16:23 | 03-30-2020 14:16:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=h1) Report
> Merging [#3532](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d38bbb225f7b847e8be4e969cb9b40e7e4d798a6&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3532 +/- ##
==========================================
- Coverage 77.81% 77.80% -0.01%
==========================================
Files 100 100
Lines 17062 17062
==========================================
- Hits 13276 13275 -1
- Misses 3786 3787 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=footer). Last update [d38bbb2...0475459](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,531 | closed | [T5, docs] remove useless and confusing lm_labels line | Remove useless docstring | 03-30-2020 12:58:36 | 03-30-2020 12:58:36 | |
transformers | 3,530 | closed | TypeError: sequence item 0: expected str instance, NBProgressBar found | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Google Colab Notebook
2. Run: `!python3 run_tf_ner.py --data_dir ./data --model_type bert --labels labels.txt --model_name_or_path bert-base-multilingual-cased --output_dir germeval-model --max_seq_length 128 --num_train_epochs 3 --per_device_train_batch_size 32 --save_steps 750 --seed 1 --do_train --do_eval --do_predict`
3.
```
I0330 12:15:45.104261 140536926144384 modeling_tf_utils.py:388] loading weights file germeval-model/tf_model.h5
I0330 12:15:46.316838 140536926144384 modeling_tf_utils.py:428] Layers of TFBertForTokenClassification not initialized from pretrained model: ['dropout_75']
I0330 12:15:46.317042 140536926144384 modeling_tf_utils.py:432] Layers from pretrained model not used in TFBertForTokenClassification: ['dropout_37']
I0330 12:15:46.317251 140536926144384 run_tf_ner.py:420] Loading features from cached file ./data/cached_dev_bert-base-multilingual-cased_128.tf_record
I0330 12:15:46.483210 140536926144384 run_tf_ner.py:318] ***** Running evaluation *****
I0330 12:15:46.483375 140536926144384 run_tf_ner.py:319] Num examples = 2200
I0330 12:15:46.483478 140536926144384 run_tf_ner.py:320] Batch size = 8
Traceback (most recent call last):
File "run_tf_ner.py", line 644, in <module>
app.run(main)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_tf_ner.py", line 579, in main
args, strategy, model, tokenizer, labels, pad_token_label_id, mode="dev"
File "run_tf_ner.py", line 322, in evaluate
for eval_features, eval_labels in eval_iterator:
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 39, in __iter__
if self.total != 0: self.update(0)
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 56, in update
self.update_bar(0)
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 76, in update_bar
else: self.on_update(val, f'{100 * val/self.total:.2f}% [{val}/{self.total} {elapsed_t}<{remaining_t}{end}]')
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 126, in on_update
elif self.parent is not None: self.parent.show()
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 167, in show
self.html_code = '\n'.join([getattr(self.inner_dict[n], 'progress', self.inner_dict[n]) for n in to_show])
TypeError: sequence item 0: expected str instance, NBProgressBar found
```
## Expected behavior
Evaluation of test.txt
## Environment info
- `transformers` version: 2.6.0
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.2.0rc1
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: using example setup
 | 03-30-2020 12:45:11 | 03-30-2020 12:45:11 | What's also strange: the PyTorch version lists the labels as text, e.g. "B-ORG", "I-LOC", [...] in the Model Config Output, but the TF version lists them like this below. Is this okay?
```
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10",
"11": "LABEL_11",
"12": "LABEL_12",
"13": "LABEL_13",
"14": "LABEL_14",
"15": "LABEL_15",
"16": "LABEL_16",
"17": "LABEL_17",
"18": "LABEL_18",
"19": "LABEL_19",
"20": "LABEL_20",
"21": "LABEL_21",
"22": "LABEL_22",
"23": "LABEL_23",
"24": "LABEL_24",
"25": "LABEL_25"
```<|||||>Any update or fix for this?<|||||>Got it working - the parent progress bar is not needed for evalute/predict as there is a single iteration.
In run_tf_ner.py I changed:
eval_iterator = progress_bar(eval_dataset, total=num_eval_steps,parent=master, display=args["n_device"] > 1)
to
eval_iterator = progress_bar(eval_dataset, total=num_eval_steps, display=args["n_device"] > 1)
and commented out
master = master_bar(range(1))<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,529 | closed | [T5] fix lm labels in docstring | Add better explanation to T5 `lm_labels` docstring. | 03-30-2020 12:06:26 | 03-30-2020 12:06:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=h1) Report
> Merging [#3529](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/75ec6c9e3a7de6cc3e2920f3bb531e7c840b8ada&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3529 +/- ##
==========================================
+ Coverage 77.80% 77.81% +0.01%
==========================================
Files 100 100
Lines 17062 17062
==========================================
+ Hits 13275 13277 +2
+ Misses 3787 3785 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.52% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=footer). Last update [75ec6c9...8692264](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,528 | closed | Unexpected ZeroDivisionError when calling model.prune_heads | # 🐛 Bug
Traceback (most recent call last):
File "/Users/user/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/user/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/train_bert_ml_mc.py", line 609, in <module>
masking_amount=args.masking_amount,
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/train_bert_ml_mc.py", line 288, in train_model
local_rank=transformer_args["local_rank"],
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/prune_attention_heads.py", line 488, in prune_model_and_return
metric=metric,
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/prune_attention_heads.py", line 396, in prune_heads
model.prune_heads(heads_to_prune)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_utils.py", line 234, in prune_heads
self.base_model._prune_heads(heads_to_prune)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 635, in _prune_heads
self.encoder.layer[layer].attention.prune_heads(heads)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 301, in prune_heads
self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_utils.py", line 824, in prune_linear_layer
new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 81, in __init__
self.reset_parameters()
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 84, in reset_parameters
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/init.py", line 325, in kaiming_uniform_
std = gain / math.sqrt(fan)
ZeroDivisionError: float division by zero
Basically, the error is thrown when calling model.prune_heads(heads_to_prune). It does not occur every time I run the script, and I am not sure what is causing it.
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) - No
* [x] my own modified scripts: (give details below) - Yes
My script calls mask_heads and then prune_heads similar to the original bertology script
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
A multi-class classification task on a proprietary dataset
## To reproduce
Steps to reproduce the behavior:
Still don't know as the error seems to be unexpected. I don't get it every time I run the script.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
model.prune_heads(heads_to_prune) where heads_to_prune -> Dict[int, List] where key is the layer number and value is the list of heads to prune (Calculated by calling the mask_heads function). Expected behaviour is pruning off heads present in heads_to_prune for each layer
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.2.2
- Platform: macOS Catalina 10.15
- Python version: Python 3.7.3
- PyTorch version (GPU?): 1.3.1 (No GPU)
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-30-2020 11:16:08 | 03-30-2020 11:16:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
 <|||||>Found the solution to this. When you try to prune all the attention heads in a layer, you will run into this error. This is why it sometimes shows up and sometimes does not: your pruning function may or may not decide to prune all the attention heads in some layer, depending on how you are computing the importance of each attention head. If you try the example on the Hugging Face Model page for the prune_heads function, {1: [0, 2], 2: [2, 3]}, it should work without any error (at least, you will not end up with the ZeroDivisionError).
I was able to debug this by printing out what my original heads_to_prune dictionary looked like, and therefore noticed the edge case. With this hunch, testing it out on other cases confirmed the cause. In the future, printing out your inputs to the function that is returning the error is a good practice! Especially when the function is implemented by some entity like Hugging Face and the only thing that could probably go wrong is the input you give it.
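To make the edge case concrete, here is a minimal sketch of the safe call pattern (the checkpoint name is just an example):
```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Leaves at least one head alive in every pruned layer -> no ZeroDivisionError.
model.prune_heads({1: [0, 2], 2: [2, 3]})

# By contrast, pruning ALL heads of a layer, e.g. {0: list(range(12))},
# is the edge case that triggers the error described above.
```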
Hope this helps! |
transformers | 3,527 | closed | Bart.generate requires config.output_past=True | Is there a way to generate using pre-trained BART like one in
https://huggingface.co/blog/how-to-generate
I am currently using BART for a generation task but finetuning it
I was wondering if it's possible to see generation results from the pre-trained BART | 03-30-2020 10:24:29 | 03-30-2020 10:24:29 | Bart is an encoder-decoder model, so it should rather be used for translating one sequence into another. This means that the generation method expects `input_ids` and creates `decoder_input_ids`.
Maybe you can take a look at this: https://sshleifer.github.io/blog_v2/jupyter/2020/03/12/bart.html<|||||>I think I might have found a potential issue with `BartForConditionalGeneration`. In zero-shot setup, the vanilla `bart-large` model produces gibberish, while the `bart-large-cnn` can generate fluent language. I think the problem is with the default setup on `output_past` attribute of `BartConfig`
Example:
```
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name_or_path = 'bart-large'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = BartForConditionalGeneration(model_name_or_path)
text = "Trump falsely denied that he claimed governors from certain states"
input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']
output = model.generate(input_ids=input_ids, max_length=50, num_beams=1)
print(tokenizer.decode(output[0]))
```
If `model_name_or_path="bart-large"`, the result will be `<s>Mr\'<s>Mr\'Mr"Mr""<s>Mr"Mr"\'Mr"<s>Mr"<s>Mr"<s>Mr"<s>Mr"Mr"<s>Mr"<s>Mr\'Mr"\'Mr"Mr"\'Mr"Mr`.
If it is set to `bart-large-cnn`, the result will be `</s><s><s><s>Trump falsely denied that he claimed governors from certain states. Trump falsely denied he claimed that he had been in contact with governors from some states. He also falsely denied saying he had met with governors of certain states in the past. Trump`
But once I override the `output_past` flag in config, the result of `bart-large` will be normal:
```
config = BartConfig.from_pretrained('bart-large')
config.output_past = True
model = BartForConditionalGeneration(model_name_or_path, config=config)
...
```
Result would be: `<s>MrThreatening to deport immigrants from certain states</s>`
This seems to be related to autoregressive decoding where the decoder states need to be cached. Not sure if this is intended so that `bart-large` is always used as a masked language model, correct me if I'm wrong.
<|||||>Thanks Xinyu . I owe you a drink :)<|||||>> I think I might have found a potential issue with `BartForConditionalGeneration`. In zero-shot setup, the vanilla `bart-large` model produces gibberish, while the `bart-large-cnn` can generate fluent language. I think the problem is with the default setup on `output_past` attribute of `BartConfig`
>
> Example:
>
> ```
> from transformers import AutoTokenizer, BartForConditionalGeneration
>
> model_name_or_path = 'bart-large'
>
> tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
> model = BartForConditionalGeneration(model_name_or_path)
>
> text = "Trump falsely denied that he claimed governors from certain states"
> input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']
> output = model.generate(input_ids=input_ids, max_length=50, num_beams=1)
> print(tokenizer.decode(output[0]))
> ```
>
> If `model_name_or_path="bart-large"`, the result will be `<s>Mr\'<s>Mr\'Mr"Mr""<s>Mr"Mr"\'Mr"<s>Mr"<s>Mr"<s>Mr"<s>Mr"Mr"<s>Mr"<s>Mr\'Mr"\'Mr"Mr"\'Mr"Mr`.
>
> If it is set to `bart-large-cnn`, the result will be `</s><s><s><s>Trump falsely denied that he claimed governors from certain states. Trump falsely denied he claimed that he had been in contact with governors from some states. He also falsely denied saying he had met with governors of certain states in the past. Trump`
>
> But once I override the `output_past` flag in config, the result of `bart-large` will be normal:
>
> ```
> config = BartConfig.from_pretrained('bart-large')
> config.output_past = True
> model = BartForConditionalGeneration(model_name_or_path, config=config)
> ...
> ```
>
> Result would be: `<s>MrThreatening to deport immigrants from certain states</s>`
>
> This seems to be related to autoregressive decoding where the decoder states need to be cached. Not sure if this is intended so that `bart-large` is always used as a masked language model, correct me if I'm wrong.
@sshleifer - maybe you can answer this better than I can<|||||>@patrickvonplaten
```
>>> model = BartForConditionalGeneration(model_name_or_path, config=c)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() got multiple values for argument 'config'
```
Getting this error. Also is there a way to force a generation to contain prefix tokens?
i know fairseq has this feature<|||||>@tuhinjubcse
- to pass a model name, you need to instantiate using `from_pretrained`. You can pass in configuration options as keyword arguments.
```python
BartForConditionalGeneration.from_pretrained(model_name, **c.__dict__)
```
- for prefix tokens, see the `decoder_start_input_ids` kwarg to `generate`<|||||>@XinyuHua you are correct!<|||||>Idk the results look pretty bad to me @sshleifer
```
from transformers import AutoTokenizer, BartForConditionalGeneration ,BartConfig
c = BartConfig.from_pretrained('bart-large')
c.output_past = True
model_name_or_path = 'bart-large'
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = BartForConditionalGeneration.from_pretrained(model_name_or_path, config=c)
text = "Milton scrunched his eyes and moodily turned back to his computer like a"
input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']
input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']
output = model.generate(input_ids=input_ids,do_sample=True,max_length=50,top_k=5,temperature=0.7)
print(tokenizer.decode(output[0]))
```
The output I got is *MrMilton*<|||||>I'm not super surprised, since 'bart-large' is not finetuned on a generative task.<|||||>@sshleifer do you suggest using a different checkpoint or model
The reason I am asking is I am fine tuning on a novel dataset created for a task
But I need to have a baseline where I wanted to see how BART pretrained does , coz based on GPT2 it seems it does decently on generative tasks<|||||>I think it depends on the task, but I haven't tried using bart for the "text continuation" type workflow. CTRL, GPT2, T5 could work better.<|||||>@sshleifer Let me be a bit clear
I wanted to do something like
text_input = “Milton scrunched his eyes and moodily turned back to his computer helpless”
text_output = “Milton scrunched his eyes and moodily turned back to his computer like a”
I want my output to contain text_output as a prefix
Normally when I was fine-tuning BART where I had paired data
Milton scrunched his eyes and moodily turned back to his computer helpless----->Milton scrunched his eyes and moodily turned back to his computer like a despondent child
The generation result was
Milton scrunched his eyes and moodily turned back to his computer like a child caught in the headlights
I want to be able to get some results without fine-tuning and just using pretrained BART to compare. How do I do that?
<|||||>The short answer is I don't know, we don't have that use case supported with Bart.
For now I am going to close this, but feel free to open a discussion issue about your task. |
transformers | 3,526 | closed | bug in run_glue.py | # 🐛 Bug
## Information
Model I am using (FlauBert ...):
Language I am using the model on (French ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
the script "run_glue" (I did not modified anything) I took it as it is
The tasks I am working on is:
* [x] an official FLUE task: (CLS = classification task): finetuning the model
## To reproduce
Steps to reproduce the behavior:
1. Just run the script run_glue.py from a bash script, following the commands in the FlauBert tutorial (https://github.com/getalp/Flaubert/tree/master/flue).
config='flue/examples/cls_books_lr5e6_hf_base_cased.cfg'
source $config
python ~/transformers/examples/run_flue.py \
--data_dir $data_dir \
--model_type flaubert \
--model_name_or_path $model_name_or_path \
--task_name $task_name \
--output_dir $output_dir \
--max_seq_length 512 \
--do_train \
--do_eval \
--learning_rate $lr \
--num_train_epochs $epochs \
--save_steps $save_steps \
--fp16 \
--fp16_opt_level O1 \
|& tee output.log
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
03/27/2020 11:54:48 - WARNING - main - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
Traceback (most recent call last):
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in
main()
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'flaubert'
```
Code of line 613
```
# Training
if args.do_train:
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained() #613
if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
# Create output directory if needed
if not os.path.exists(args.output_dir) and args.local_rank in [-1, 0]:
os.makedirs(args.output_dir)
logger.info("Saving model checkpoint to %s", args.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = (
model.module if hasattr(model, "module") else model
) # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would have expected it to work and train the model.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: => how do I know? (I git pull before running the script)
- Platform: linux
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| 03-30-2020 09:07:38 | 03-30-2020 09:07:38 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,525 | closed | Issue loading custom tokenizer for fine-tuning gpt2 | I'm trying to fine-tune gpt2 with a custom tokenizer. It was working fine just over 10 days ago, with --tokenizer_name=/path to vocab and merges folder/ and now it cannot load, asking to check if it's a correct model identifier or contains a config.json file. As if instead of a tokenizer it is now trying to load a model? It also asked for an extra model identifier in the config file of my model, which before was not required.
I suppose there was a library update? What would be the workaround? Thanks in advance.
| 03-30-2020 07:38:54 | 03-30-2020 07:38:54 | Can you post a sample code?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,524 | closed | Add shoarora/electra and alectra model cards | Add model cards for recently uploaded models:
- shoarora/electra-small-owt (BERT)
- shoarora/alectra-small-owt (ALBERT) | 03-30-2020 04:00:35 | 03-30-2020 04:00:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=h1) Report
> Merging [#3524](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3524 +/- ##
==========================================
- Coverage 77.80% 77.79% -0.02%
==========================================
Files 100 100
Lines 17051 17051
==========================================
- Hits 13267 13265 -2
- Misses 3784 3786 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3524/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <0.00%> (-0.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3524/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=footer). Last update [33ef700...daff82d](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Model pages:
https://huggingface.co/shoarora/alectra-small-owt
https://huggingface.co/shoarora/electra-small-owt
Thanks for sharing @shoarora
Did you see those models btw @LysandreJik? |
transformers | 3,523 | closed | Why GPT2 train loss and topK accuracy both decrease? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi,
I am training GPT2LMHeadModel from scratch. The training loss decreases; however, when I test on the same training dataset, the top-3 accuracy decreases as well. Moreover, is it normal that my top-3 accuracy drops sharply, e.g. from 0.2 to 0.05, in only one or two epochs? It seems to be stable once training converges. Did anyone meet the same problem?

<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-30-2020 02:49:15 | 03-30-2020 02:49:15 | I found a mistake that in GPT2LMHeadModel, the label is shifted, however, I shift it again when preparing the batch. |
transformers | 3,522 | closed | why isn't AlbertForMultipulChioce in modeling_albert? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I just copied the code from RobertaForMultipleChoice into modeling_albert and changed all 'roberta' to 'albert', but the loss doesn't go down noticeably, and the result is even worse than what the papers report.
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-30-2020 01:38:00 | 03-30-2020 01:38:00 | I have come accross the same problem when I was testing RACE for Albert.
Your implementation might be right, because there are few differences between the "roberta" and "albert" fine-tuning heads.
If you are testing RACE like me, the real problem may lie in the lack of max_qa_length implementation in run_multiple_choice.py.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,521 | closed | [T5] make decoder input ids optional for t5 training | - [x] Make `decoder_input_ids` optional when supplying `lm_labels` for `T5ForConditionalGeneration`
- [x] Add test | 03-29-2020 23:48:19 | 03-29-2020 23:48:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=h1) Report
> Merging [#3521](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `95.23%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3521 +/- ##
=======================================
Coverage 77.80% 77.81%
=======================================
Files 100 100
Lines 17051 17069 +18
=======================================
+ Hits 13267 13282 +15
- Misses 3784 3787 +3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <ø> (-0.18%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <ø> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.79% <95.23%> (+0.50%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=footer). Last update [33ef700...168bed0](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten Hi Patrick! Could you tell me what is the difference between decoder_input_ids and lm_labels for T5ForConditionalGeneration? For context: I am using T5ForConditionalGeneration for paraphrase generation. I am checking this code:https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/master/t5-pretrained-question-paraphraser.ipynb He uses lm_labels with decoder_attention_mask. Thanks in advance!<|||||>@mengyahuUSTC-PU . When calling the [forward()](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/9b26db2336d6077cc9d95bc28f123d32298aaf94/train.py#L66) , decoder_input_ids is None as follows:
```
outputs = self(
input_ids=batch["source_ids"],
attention_mask=batch["source_mask"],
lm_labels=lm_labels,
decoder_attention_mask=batch['target_mask']
)
```
decode_input_ids is derived from lm_labels if decode_input_ids is None. [decode_input_ids=](https://github.com/huggingface/transformers/blob/1aec991643a6fec0e7d504626fc68347fe93b658/src/transformers/modeling_t5.py#L1156)
I was wondering in what case I need to feed decoder_input_ids.
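For illustration, a minimal sketch of the case this PR enables (checkpoint name is just an example): supply only `lm_labels` and let `decoder_input_ids` be derived internally.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="pt")
lm_labels = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="pt")

# No decoder_input_ids passed; they are created from lm_labels inside the model.
loss = model(input_ids=input_ids, lm_labels=lm_labels)[0]
```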
|
transformers | 3,520 | closed | WIP: haiku bert implementation | Still a work in progress but the contextual embeddings line up with the pytorch version so this is roughly at parity with jax-bert
TODO (mostly notes to myself):
- [x] Add `save_pretrained`
- [ ] Make `from_pretrained` work with names
- [ ] Add dropout at training time, pass through training flag
- [ ] Make sure weight initializations line up when pre-trained state isn't passed
- [ ] Gradually work towards parity with the pytorch version if desired? (target models, BERT variants, etc.)
- [ ] Write HaikuPretrainedModel to take advantage of archive resolution / make saving + loading compatible with pytorch bins?
To use the pre-trained weights cleanly I ended up subclassing `hk.Module` -- unsure how I feel about this decision but I couldn't think of a better method at the time. Feel free to suggest an alternative if you have ideas. | 03-29-2020 22:57:05 | 03-29-2020 22:57:05 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,519 | closed | Resizing embedding matrix before sending it to the optimizer. | This bug will arise when you add new token types to the Tokenizer.
Since the resizing was done **after** passing the params to the optimizer, the wrong set of params for the embedding table were optimized. | 03-29-2020 20:01:30 | 03-29-2020 20:01:30 | Since I just swapped some lines, I guess this code quality check got activated after this file came up in the repo..! 😬<|||||>Hi Nicolas,
You have to install the code style tools and run `make style` and `make quality` on your PR.
Check the contributing guide for the details: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests<|||||>Shoot I'm sorry, totally forgot to check the contributing guidelines. Let me fix this real quick. |
transformers | 3,518 | closed | Argument “never_split” not working on bert tokenizer | I used the ```never_split``` option and tried to retain some tokens. But the tokenizer still divide them into wordpieces.
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'])
tokenizer.tokenize("lol That's funny")
['lo', '##l', 'that', "'", 's', 'funny']
```
**A link to original question on Stack Overflow**:
https://stackoverflow.com/posts/60914793/edit | 03-29-2020 17:23:36 | 03-29-2020 17:23:36 | Does this problem arise with the fast tokenizer too? Can you try both:
```python
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=True)
# and
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=False)
```<|||||>> Does this problem arise with the fast tokenizer too? Can you try both:
>
> ```python
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=True)
> # and
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=False)
> ```
Tried. Neither works...
However, if I tried to load customzied vocab which replace "[unused]" toakens with the ones I don't want to split. The tokenizer works.
But the default vocab only allows around 1k new tokens. If I add more, the embedding size will change. But the TF models will raise implementation error if I called this:
```
bert = TFBertModel.from_pretrained('bert-base-uncased')
bert.resize_token_embeddings(36000)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This problem still exists where the "never_split"
```python
from transformers import BertTokenizer
USE_FAST = False
tokenizer = BertTokenizer.from_pretrained("bert-base-cased",
use_fast=USE_FAST,
never_split=['lol'],
do_basic_tokenize=True)
print(tokenizer.tokenize(" lol That's funny"))
```
I started going through the code, but it's a bit of a rabbit hole since you have "never_split" as an init argument of the basic tokenizer as well as the pretrained tokenizer, but also as part of the `tokenize` method. It isn't clear to me exactly where it is used.
Perhaps some doc changes are needed as well, since it is typed as boolean:
https://github.com/huggingface/transformers/blob/35df91148545e09cd199d89e707043eba5434f59/src/transformers/tokenization_bert.py#L133
cc @n1t0 @mfuntowicz <|||||>I just tested the last example provided by @BramVanroy and it seems to work after #4723. Do not hesitate to reopen if needed! |
transformers | 3,517 | closed | [Tokenization] fix edge case for bert tokenization | This PR fixes #3502 .
The reason why the tests fail in #3502 is because of an edge case.
If the input to `tokenizer.batch_encode_plus()` consists of a tokenized string that results in a list of exactly two strings (``[[16], [.]]`` in issue #3502) then it is treated as a pair of input sequences (=> [CLS] input_sequence_1 [SEP] input_sequence_2 [SEP]) but this behavior should only happen if the input list consists of two **untokenized** strings. | 03-29-2020 16:53:12 | 03-29-2020 16:53:12 | @mfuntowicz @n1t0 @LysandreJik - could you check? :-) <|||||>Does this mean that `batch_encode_plus` is supposed to handle "pre-tokenized" inputs? I thought this was something introduced by https://github.com/huggingface/transformers/pull/3185 with a specific flag `is_pretokenized` (cc @mfuntowicz)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=h1) Report
> Merging [#3517](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5aa8a278a3f13b8f83a0deb9b6d743f159cea23c&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3517 +/- ##
==========================================
+ Coverage 78.03% 78.05% +0.01%
==========================================
Files 104 104
Lines 17708 17709 +1
==========================================
+ Hits 13819 13822 +3
+ Misses 3889 3887 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.78% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=footer). Last update [5aa8a27...3bde162](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Does this mean that `batch_encode_plus` is supposed to handle "pre-tokenized" inputs? I thought this was something introduced by #3185 with a specific flag `is_pretokenized` (cc @mfuntowicz)
@mfuntowicz showed me the `is_pretokenized` flag for tokenizers v3.0.0 so this makes everything much easier |
transformers | 3,516 | closed | [Docs] examples/summarization/bart: Simplify CNN/DM preprocessing steps | I added the preprocessed data in S3.
Evidence that it is the correct size:
 | 03-29-2020 14:46:59 | 03-29-2020 14:46:59 | Merging to unblock patrick. |
transformers | 3,515 | closed | Isort installed from github branch does not correspond to circle ci isort | # 🐛 Bug
## Information
Installing isort via:
`$ pip install -U git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort`
does not correspond to the circle ci version anymore, so that formatting with `make style` makes the circle ci code quality check fail.
- `transformers` version: 2.6.0
- Platform: Linux-5.3.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): 2.1.0 (False)
| 03-29-2020 13:46:09 | 03-29-2020 13:46:09 | Looking at the CI config, it should install the correct version, though. Not sure what I am missing here.
https://github.com/huggingface/transformers/blob/e5c393dcebf42eaec9c1e1d619b5a7788a2d7c65/.circleci/config.yml#L89<|||||>Hi @BramVanroy,
Thanks for your answer :-)
I get the feeling that it's somehow related to my computer.
Running `pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort` in my terminal installs the same isort version / the exact same code (isort-4.3.21)
as if running
`pip install isort`
When I `pip uninstalled isort` and reinstalled it with `pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort` the library was not the same anymore. Any ideas what could be happening here on my computer? I tried deleting the pip cache, but no success yet :-/. <|||||>Ok for some reason, upgrading python3.6 to python3.7 solved the problem for me |
transformers | 3,514 | closed | [Examples] Clean summarization and translation example testing files for T5 and Bart | - Only create temporary files instead of "real" files so that each `test_file` in `examples/summarization` and `examples/translations` creates unique testing output files that cannot be overridden by other test files. | 03-29-2020 13:10:55 | 03-29-2020 13:10:55 | |
transformers | 3,513 | closed | Adding mbart-large-cc25 | # 🌟 New model addition
Multilingual BART model implemented in fairseq introduced by FAIR
## Model description
This issue is to request adding mBART model existing as a part of fairseq lib.
[Link to the fairseq description of the model](https://github.com/pytorch/fairseq/tree/master/examples/mbart
)
[Link to the mBART paper](https://arxiv.org/abs/2001.08210)
Multilingually pretrained BART checkpoint.
<!-- Important information -->
The model code follows the original BART model code which is already a part of ```transformers``` repo. However, it introduces a couple more features like multilingual denoising and translation from pretrained BART.
## Open source status
- [x] _the model implementation is available: (give details)_
[Link to the PR adding mBART to the fairseq](https://github.com/pytorch/fairseq/commit/5e79322b3a4a9e9a11525377d3dda7ac520b921c)
This PR shows the main pieces that were added to the fairseq to make mBART work considering BART which is already existing in the codebase. However, a few additional mBART commits were added afterward.
- [x] _the model weights are available: (give details)_
[Link to the weights](https://github.com/pytorch/fairseq/tree/master/examples/mbart#pre-trained-models)
- [x] _who are the authors: (mention them, if possible by @gh-username)_
Facebook AI Research (@MultiPath) | 03-29-2020 12:32:30 | 03-29-2020 12:32:30 | This is a Work in progress but still a few weeks out :)<|||||>Hi @sshleifer , additional (perhaps bug, or document bug) related to this issue:
This model page suggests that we can load mBart-cc25 :
https://huggingface.co/facebook/mbart-large-cc25
However, using the instructed command with the newest HuggingFace 2.8.0 :
`model = AutoModel.from_pretrained("facebook/mbart-large-cc25")`
is failed :
```
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-4-c034f52e2196> in <module>
11 '''
12
---> 13 model = AutoModel.from_pretrained("facebook/mbart-large-cc25")
14 tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
/opt/conda/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
421 for config_class, model_class in MODEL_MAPPING.items():
422 if isinstance(config, config_class):
--> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
424 raise ValueError(
425 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n"
/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
625 except Exception:
626 raise OSError(
--> 627 "Unable to load weights from pytorch checkpoint file. "
628 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
629 )
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
```<|||||>Yes, the docs are wrong/aspirational at the moment. Will fix today!<|||||>Fixed the docs. That model is currently not supported, but it's on my roadmap to add it in the coming weeks.<|||||>sshleifer, wonder if the mbart-large-cc25 have been added? We are looking to use mbart for a multilingual text classification problem. Thanks for the great work.
Patrick<|||||>Hopefully this weekend!<|||||>What languages are you trying to support?
We have 1,000+ models in the `MarianMTModel` family, 11 of which are multi-lingual.
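For anyone who only needs translation, a minimal sketch of using one of those Marian checkpoints (the checkpoint name is just an example; pick the pair or multilingual group that covers your languages):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # example checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")
translated = model.generate(input_ids)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```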
<|||||>We are blocked for the moment on https://github.com/pytorch/fairseq/issues/2258,
if anybody has any ideas how to fix that it would be much appreciated!
|
transformers | 3,512 | closed | how to get activation weights of a pretrained model? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
how to get activation weights of a pretrained model?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-29-2020 11:10:47 | 03-29-2020 11:10:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,511 | closed | Update the NER TF script | PR to remove the softmax and make the pad token label id to -1. | 03-29-2020 10:51:14 | 03-29-2020 10:51:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=h1) Report
> Merging [#3511](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3511 +/- ##
==========================================
+ Coverage 77.79% 77.80% +0.01%
==========================================
Files 100 100
Lines 17051 17051
==========================================
+ Hits 13265 13267 +2
+ Misses 3786 3784 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.27% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3511/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=footer). Last update [601ac5b...572e04d](https://codecov.io/gh/huggingface/transformers/pull/3511?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,510 | closed | reproducing the performance of XLM-ROBERTA on MLQA dataset on the zh language | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
I have trouble in reproducing the result of XLM-ROBERTA on MLQA dataset for Chinese languages. The result of the rest of the language seems fine, however on the zh section, I have very low {'exact_match': 4.263188631496982, 'f1': 17.451059178461946})! | 03-29-2020 10:01:18 | 03-29-2020 10:01:18 | Credit to people at Microsoft Asia: Ning Wu and Nan Dua
To achieve the best performance on the "zh" test sets, you just need to add
"final_text = tok_text" after line 497 in squad_metrics.py (only for zh). Because there isn't space and subword in Chinese, so we don't need to execute the get_final_test() function.<|||||>This is very interesting! Thanks for letting us know @nooralahzadeh!<|||||>@LysandreJik Do you think the training model "RobertaForQuestionAnswering" also need to be updated for 'zh' lang. Because when I try to fine-tune the xlm-r on 'zh' language and evaluate on its test set, the results became very lower than not fine-tuning.
<|||||>Just found the same problem, thanks bro!
> Credit to people at Microsoft Asia: Ning Wu and Nan Dua
> To achieve the best performance on the "zh" test sets, you just need to add
> "final_text = tok_text" after line 497 in squad_metrics.py (only for zh). Because there isn't space and subword in Chinese, so we don't need to execute the get_final_test() function.
<|||||>I used huggingface to train & predict then use https://github.com/facebookresearch/MLQA/blob/main/mlqa_evaluation_v1.py to calculate the scores. Seem to match the paper. |
transformers | 3,509 | closed | Fix for continuing training | If `args.should_continue` is true, then there is no way in the current example to reload a checkpoint since it is assumed it exists within the output_dir. Checking here fixes everything as expected. | 03-29-2020 07:32:00 | 03-29-2020 07:32:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=h1) Report
> Merging [#3509](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **not change** coverage by `%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3509 +/- ##
=======================================
Coverage 77.79% 77.79%
=======================================
Files 100 100
Lines 17051 17051
=======================================
Hits 13265 13265
Misses 3786 3786
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=footer). Last update [601ac5b...43fda01](https://codecov.io/gh/huggingface/transformers/pull/3509?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,508 | closed | [Bart] when output_past=False BartForConditionalGeneration raises confusing error | # 🐛 Bug
## Information
i am using BART
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Summarization
## To reproduce
Steps to reproduce the behavior:
```python
tokenizer = BartTokenizer.from_pretrained('bart-large-mnli')
model = BartForConditionalGeneration.from_pretrained('bart-large-mnli')
article_input_ids = tokenizer.batch_encode_plus([LONG_BORING_TENNIS_ARTICLE], return_tensors='pt', max_length=1024)['input_ids'].to(torch_device)
summary_ids = model.generate(article_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=100,
early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```
i get error
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-27-2df1c6607426> in <module>()
4 length_penalty=2.0,
5 max_length=100,
----> 6 early_stopping=True)
7
8 print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
3 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in _reorder_cache(past, beam_idx)
921 @staticmethod
922 def _reorder_cache(past, beam_idx):
--> 923 ((enc_out, enc_mask), decoder_cached_states) = past
924 reordered_past = []
925 for layer_past in decoder_cached_states:
ValueError: too many values to unpack (expected 2)
```
this works with bart-large-cnn
but gives error with other models? | 03-29-2020 06:49:32 | 03-29-2020 06:49:32 | @sshleifer I'm seeing this as well. It doesn't happen if `num_beams=1`. Might have to do with the recent generation and bart changes. Only started happening in the last week or so. <|||||>Thanks for contributing!
A few thoughts:
1. if you pass `output_past=True` to `BartForConditionalGeneration.from_pretrained`, the code works.
2. We only expect 'bart-large-xsum' and 'bart-large-cnn' to generate high quality summaries.
3. The error message/traceback should be improved (see the rough sketch just below this list). Feel free to send a PR if you'd like.
4. Thanks for copy pasting usable code, it made this really easy to verify :) I added "```python" at the beginning to prettify.
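Regarding point 3, a clearer failure could look roughly like this (a hypothetical sketch, not the actual library code):
```python
def _reorder_cache(past, beam_idx):
    # Hypothetical sketch: validate the cache layout before unpacking so the user
    # gets an actionable message instead of "too many values to unpack".
    if not (isinstance(past, tuple) and len(past) == 2):
        raise ValueError(
            "`past` does not look like ((encoder_out, encoder_mask), decoder_cached_states). "
            "Load the model with `output_past=True` so generate() receives a decoder cache."
        )
    (enc_out, enc_mask), decoder_cached_states = past
    # the real method would go on to reorder decoder_cached_states along beam_idx
    return (enc_out, enc_mask), decoder_cached_states
```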
### Working example
copy paste [LONG_BORING_TENNIS_ARTICLE](https://gist.github.com/sshleifer/8d9df1937fec07cf77266e222689e9a9)
```python
model_name = 'bart-large-mnli'
from transformers import *
torch_device='cpu'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name, output_past=True)
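# output_past=True above is the key difference: it makes the model return the decoder cache that beam search needs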
article_input_ids = tokenizer.batch_encode_plus([LONG_BORING_TENNIS_ARTICLE], return_tensors='pt', max_length=1024)['input_ids'].to(torch_device)
summary_ids = model.generate(article_input_ids,
num_beams=4,
length_penalty=2.0,
max_length=100,
early_stopping=True)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```<|||||>This wasn't solved. I am using a trainer on BART and I have tried to use ```use_cache```, but it still doesn't work. |
transformers | 3,507 | closed | [T5] Add training documentation | - Fixes T5 docstring regarding pretraining
- Add detailed description on how to process input and target for T5 training | 03-29-2020 01:54:49 | 03-29-2020 01:54:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=h1) Report
> Merging [#3507](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3507 +/- ##
=======================================
Coverage 77.79% 77.80%
=======================================
Files 100 100
Lines 17051 17051
=======================================
+ Hits 13265 13266 +1
+ Misses 3786 3785 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.29% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3507/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=footer). Last update [601ac5b...ea7e6a8](https://codecov.io/gh/huggingface/transformers/pull/3507?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,506 | closed | No grad feature in model parameters | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I wonder whether the parameters returned by model.named_parameters() have a grad attribute. If they don't, how can I add one?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 03-29-2020 01:47:31 | 03-29-2020 01:47:31 | This is more of a general PyTorch question than a Transformers-question. Have you tried asking on StackOverflow or the PyTorch forums? |
transformers | 3,505 | closed | Add clear description of how to train T5 | 03-29-2020 01:43:54 | 03-29-2020 01:43:54 | ||
transformers | 3,504 | closed | [Docs] Update usage doc regarding generate fn | Update `model.generate()` docs | 03-29-2020 00:13:49 | 03-29-2020 00:13:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=h1) Report
> Merging [#3504](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3504 +/- ##
==========================================
- Coverage 77.79% 77.76% -0.03%
==========================================
Files 100 100
Lines 17051 17051
==========================================
- Hits 13265 13260 -5
- Misses 3786 3791 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.65% <0.00%> (-0.84%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3504/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=footer). Last update [601ac5b...5779256](https://codecov.io/gh/huggingface/transformers/pull/3504?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,503 | closed | Distil-BART? | 03-28-2020 23:16:18 | 03-28-2020 23:16:18 | Interesting idea! What do you think @thomwolf ?<|||||>Hi, any update on this, even partial code?<|||||>I'm gunna take a crack this weekend, hopefully, By starting from the [distilbert example](https://github.com/huggingface/transformers/tree/master/examples/distillation) and modifying. I'll post a branch if I make meaningful progress.<|||||>Hi, just checking in to see if there's a branch already (couldn't find it). Thanks!<|||||>Yes, it is will be great! Any updates?
<|||||>I'm gunna wait until the code is stable/reusable to release it, sorry for the change of plans.<|||||>As per https://twitter.com/sam_shleifer/status/1276160367853547522, it looks like distilBART has been released :)<|||||>https://huggingface.co/sshleifer/distilbart-cnn-12-6# the tokenizer by name sshleifer/distilbart-cnn-12-6 leads to an error, works with facebook/bart-cnn-large-tokenizer<|||||>I've faced the same issue with sshleifer/distilbart-cnn-12-6
<|||||>I can't reproduce the error on master. If somebody can, it would be great if they could make a separate issue and I will try to resolve.
All the distilbart- tokenizers are identical to the `facebook/bart-large-cnn` tokenizer, which is identical to the `facebook/bart-cnn-xsum` tokenizer. @julien-c is there a fancy AWS way to synchronize/symlink them? <|||||>I've tried several models and I'm getting the same error each time it
creates a tokenizer.
<|||||>@vladislavkoz Please make a new issue with instructions to reproduce, following the issue template. Feel free to assign me.<|||||>Here is an issue https://github.com/huggingface/transformers/issues/5286
<|||||>I didn't assign you. I just read the message too late.
<|||||>@sshleifer ATM you need to duplicate the tokenizer files in each model if you want them to be loadable by the model hub, the inference API, etc.<|||||>I was able to create tokenizer only with 'distilbart-xsum-12-1' and
'distilbart-xsum-9-6'. Then on the summarization step, I'm getting another
error. I've added a comment here
<https://github.com/huggingface/transformers/issues/5286>
<|||||>Hey @sshleifer , thanks for the distilled BART version. I was able to fine-tune it with the same script on the BillSum dataset as T5, but the numbers are way different between the two. I just wanted to understand if I might be doing something wrong with regards to fine-tuning distilBART: does it require student training every time?
Reference numbers on BillSum Dataset:
T5-base:
avg_train_loss = tensor(1.5333, device='cuda:0')
avg_val_loss = tensor(1.4528, device='cuda:0')
epoch = 1
loss = tensor(1.6734, device='cuda:0')
rouge1 = 0.49188267841912325
rouge2 = 0.26436589848185027
rougeL = 0.3591894400892483
train_loss = tensor(1.6734, device='cuda:0')
val_loss = tensor(1.4528, device='cuda:0')
dBART-cnn-12-6:
avg_train_loss = tensor(1.3013, device='cuda:0')
avg_val_loss = tensor(1.4013, device='cuda:0')
epoch = 1
loss = tensor(1.4901, device='cuda:0')
rouge1 = 0.3681518923769047
rouge2 = 0.15683286277623087
rougeL = 0.23453727441540043
train_loss = tensor(1.4901, device='cuda:0')
val_loss = tensor(1.4013, device='cuda:0')
PS. I am using a modified version of the older finetune.py so it doesn't have Rouge for validation epochs.
Thanks<|||||>@amanpreet692 I moved your issue [here](https://github.com/huggingface/transformers/issues/5336) and will reply there.
Others, I am closing this since the model is released and I don't want to spam everyone. This shouldn't discourage people making new issues!
|
|
transformers | 3,502 | closed | Bert Batch Encode Plus adding an extra [SEP] | # 🐛 Bug
## Information
I'm using `bert-base-multilingual-cased` tokenizer and model for creating another model. However, the `batch_encode_plus` is adding an extra `[SEP]` token id in the middle.
The problem arises when using:
* Specific strings to encode, e.g. `16.`, `3.`, `10.`,
* The `bert-base-multilingual-cased` tokenizer is used beforehand to tokenize the previously described strings and
* The `batch_encode_plus` is used to convert the tokenized strings
In fact, `batch_encode_plus` will generate an `input_ids` list containing two `[SEP]`, such as in `[101, 10250, 102, 119, 102]`
I have seen similar issues, but they don't indicate the version of transformers:
https://github.com/huggingface/transformers/issues/2658
https://github.com/huggingface/transformers/issues/3037
Thus, I'm not sure if it is related to transformers version `2.6.0`
## To reproduce
Steps to reproduce the behavior (simplified steps):
1. Have a string of type `16.` or `6.`
2. Use `tokens = bert_tokenizer.tokenize("16.")`
3. Use `bert_tokenizer.batch_encode_plus([tokens])`
You can reproduce the error with this code
```python
from transformers import BertTokenizer
import unittest
class TestListElements(unittest.TestCase):
def setUp(self):
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
problematic_string = "16."
tokens = bert_tokenizer.tokenize(problematic_string)
self.encoded_batch_1 = bert_tokenizer.batch_encode_plus([tokens]) #list[list[]]
self.encoded_batch_2 = bert_tokenizer.batch_encode_plus([problematic_string]) #list[]
self.encoded_tokens_1 = bert_tokenizer.encode_plus(problematic_string)
self.encoded_tokens_2 = bert_tokenizer.encode_plus(tokens)
def test_tokens_vs_tokens(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_tokens_2["input_ids"])
def test_tokens_vs_batch_string(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_batch_2["input_ids"][0])
def test_tokens_vs_batch_list_tokens(self):
self.assertListEqual(self.encoded_tokens_1["input_ids"], self.encoded_batch_1["input_ids"][0])
if __name__ == "__main__":
unittest.main(verbosity=2)
```
The code will break at test `test_tokens_vs_batch_list_tokens`, with the following summarized output:
```
- [101, 10250, 119, 102]
+ [101, 10250, 102, 119, 102]
```
## Expected behavior
The `batch_encode_plus` should always produce the same `input_ids` no matter whether we pass them a list of tokens or a list of strings.
For instance, for the string `16.` we should get always `[101, 10250, 119, 102]`. However, using `batch_encode_plus` we get `[101, 10250, 102, 119, 102]` if we pass them an input already tokenized.
## Environment info
- `transformers` version: 2.6.0
- Platform: Linux (Manjaro)
- Python version: Python 3.8.1 (default, Jan 8 2020, 22:29:32)
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): ---
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
| 03-28-2020 17:33:34 | 03-28-2020 17:33:34 | Hi @creat89,
Thanks for posting this issue!
You are correct there is some inconsistent behavior here.
1. We should probably in general not allow using `batch_encode_plus()` of a simple string. For this the `encode_plus()` function should be used.
2. It seems like there is an inconsistency between `encode_plus([string])` and `encode_plus(string)`. This should probably be fixed.<|||||>Well, the issue not only happens with a simple string. In my actual code I was using a batch of size 2. However, I just used a simple example to demonstrate the issue.
I didn't find any inconsistency between `encode_plus([string])` and `encode_plus(string)` but `batch_encode_plus([strings])` and `batch_encode_plus([[tokens]])` <|||||>Sorry, I was skimming through your problem too quickly - I see what you mean now.
I will take a closer look at this.<|||||>Created a PR that fixes this behavior. Thanks for pointing this out @creat89 :-) <|||||>There has been a big change in tokenizers recently :-) which adds an `is_pretokenized` flag to the input which makes everything much easier. This should then be used as follows:
```python
bert_tokenizer.batch_encode_plus([tokens], is_pretokenized=True)
```
<|||||>Cool, that's awesome and yes, I'm sure that makes everything easier. Cheers! |
transformers | 3,501 | closed | [BART] Update encoder and decoder on set_input_embedding | Since _resize_token_embeddings will create a new embedding layer,
resizing the input embeddings for BART will currently break, as the
model.shared will refer to the new embedding created but the
model.encoder.embed_tokens and the model.decoder.embed_tokens will
still refer to the old embedding created.
We need to re-assign the encoder/decoder or just their weights, I opted for the second option.
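The idea is roughly the following (a hypothetical sketch, not the exact diff in this PR):
```python
def set_input_embeddings(self, value):
    # Sketch only: after _resize_token_embeddings swaps in a new shared embedding,
    # point the encoder/decoder embedding weights back at it so all three stay tied.
    self.shared = value
    self.encoder.embed_tokens.weight = self.shared.weight
    self.decoder.embed_tokens.weight = self.shared.weight
```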
Unfortunately can't see how to write a test in the test_resize_tokens_embeddings to capture this without putting a BART-specific if statement there, but this is also related to https://github.com/huggingface/transformers/issues/3378
Run tests:
733 passed, 319 skipped, 80 warnings in 269.62s (0:04:29) | 03-28-2020 16:30:44 | 03-28-2020 16:30:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=h1) Report
> Merging [#3501](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3501 +/- ##
=======================================
Coverage 77.79% 77.80%
=======================================
Files 100 100
Lines 17051 17053 +2
=======================================
+ Hits 13265 13268 +3
+ Misses 3786 3785 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3501/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3501/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=footer). Last update [601ac5b...5e11181](https://codecov.io/gh/huggingface/transformers/pull/3501?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,500 | closed | [Wait to merge] [Bart] Rename lm_labels argument to masked_lm_labels | While the docstring correctly lists masked_lm_labels, forward
was expecting lm_labels. Renaming to match BERT and the rest of
the MLM-based models
| 03-28-2020 15:58:49 | 03-28-2020 15:58:49 | Could you make sure the kwarg in `examples/summarization/bart/run_bart_sum.py` is correct? Thanks!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=h1) Report
> Merging [#3500](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/601ac5b1dc1438f00d09696588f2deb0f045ae3b&el=desc) will **increase** coverage by `0.50%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3500 +/- ##
==========================================
+ Coverage 77.79% 78.30% +0.50%
==========================================
Files 100 100
Lines 17051 17051
==========================================
+ Hits 13265 13351 +86
+ Misses 3786 3700 -86
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <100.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3500/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.49% <0.00%> (+27.59%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=footer). Last update [601ac5b...70f9258](https://codecov.io/gh/huggingface/transformers/pull/3500?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> Could you make sure the kwarg in `examples/summarization/bart/run_bart_sum.py` is correct? Thanks!
Sure, updated all occurrences on that file<|||||>@dougian : we are going to merge/adopt this after a few other PRs are merged in order to coordinate the signature with T5's signature. Thanks for your contribution!
<|||||>Fixed on master by other PRs, closing. Thanks! |
transformers | 3,499 | closed | masked_lm_loss in BertForMaskedLM model | Although I've read the documentation related to the BertForMaskedLM class, I still cannot understand how to properly calculate the loss for my problem.
Let's suppose that my target sentence is:
"_I will be writing when you arrive._"
I want to calculate loss for all words except 'arrive'.
The documentation says:
> **masked_lm_labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None)** Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]
From what I understood, I should pass the _masked_lm_labels_ argument a tensor that contains the following indices:
`tensor([[ 101, 1045, 2097, 2022, 3015, 2043, -100, 7180, 1012, 101]])`
It returns error:
`RuntimeError: Assertion 'cur_target >= 0 && cur_target < n_classes' failed.`
Can you help me and point out what is wrong in my thinking?
| 03-28-2020 15:21:59 | 03-28-2020 15:21:59 | Have a look at the `mask_tokens` method in `run_language_modeling.py`. This takes in the `input_ids`, performs masking on them and returns the masked `input_ids` and corresponding `masked_lm_labels`.<|||||>@Drpulti I am also getting the same error as you, and I believe it is because `-100` exists in the `masked_lm_labels` returned by `mask_tokens`.
These are fed to the `forward` hook of `BertForMaskedLM` (or whatever pre-trained model you are using), and ultimately to `CrossEntropyLoss`, which throws an error for `labels < 0`.
https://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L1001-L1004
The docstring says:
https://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L933-L937
but I don't see the logic where `masked_lm_labels == -100` are ignored. You can even see a comment that says `-100` is masked,
https://github.com/huggingface/transformers/blob/601ac5b1dc1438f00d09696588f2deb0f045ae3b/src/transformers/modeling_bert.py#L1002
but again, where is the code that does this? I figure that both of us might be missing the step that properly handles these `-100` values.<|||||>I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)
I think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.<|||||>When my label contains -100, I get this error when running “IndexError: Target -100 is out of bounds.”<|||||>Could you be a bit more specific as to where the error is coming from? Maybe a stack trace would be nice. Also, please upgrade your pytorch and transformers packages. I'm running transformers 2.5.0 and pytorch 1.4.0 and don't get any such issue.<|||||>@Genius1237 in fact,i think i don't relly know what is the meaning of masked_lm_labels, I want to know what he expresses and how can we get him
<|||||>@tom1125 I'm not understanding you. Are you saying that you want to know how `masked_lm_labels` are computed and how it's used in computing the loss?<|||||>@Genius1237 yes ,and i want to know how to get it,thanks<|||||>An input sentence is a sequence of sub-word tokens, represented by their IDs. This is what `input_ids` would represent (before masking). The `mask_tokens` methods takes in this, and chooses 15% of the tokens for a "corruption" process. In this "corruption" process, 80% of the chosen tokens become [MASK], 10% get replaced with a random word and 10% are untouched.
The goal of the bert model will be to take in the "corrupted" `input_ids` and predict the correct token for each token. The correct tokens, `masked_lm_labels` are also produced by the `mask_token` methods. The values of this tensor would ideally be a clone of the "uncorrupted" `input_ids`, but since the loss is computed over only the "corrupted" tokens, the value of `masked_lm_labels` for the 85% of tokens that aren't chosen for "corruption" is set to `-100` so that it gets ignored by `CrossEntropyLoss`.<|||||>@Genius1237 thank you very much,it really helps me.<|||||>> I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)
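For concreteness, a minimal sketch of that labeling scheme (simplified: the real `mask_tokens` also applies the 80/10/10 replacement split and skips special tokens):
```python
import torch

def make_mlm_inputs(input_ids, mask_token_id, mlm_probability=0.15):
    labels = input_ids.clone()
    # pick roughly 15% of positions to corrupt
    masked = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    # every position that is NOT corrupted gets -100, the default ignore_index
    # of CrossEntropyLoss, so it contributes nothing to the loss
    labels[~masked] = -100
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id  # simplified: always replace with [MASK]
    return corrupted, labels
```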
>
> I think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.
You are right! Thanks. I will try updating both packages<|||||>> I believe that the `-100` part is handled by `CrossEntropyLoss` (https://pytorch.org/docs/stable/_modules/torch/nn/functional.html#nll_loss)
>
> I think that in your case, you might be having some mismatch between pytorch and transformers versions. Try upgrading to the latest of both and check if the error is still there.
You are right, upgrade helped to resolve the issue. I'm closing the thread. |
transformers | 3,498 | closed | XLM-ROBERTA | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM-ROBERTA
In order to fine-tune XLM-ROBERTA from a pretrained file: since the config file does not have the model_type field, all the required files have to be in a folder whose name contains "xlm-roberta"; otherwise the "XLM" config will be assigned!
| 03-28-2020 15:03:35 | 03-28-2020 15:03:35 | That's a good point @nooralahzadeh! We should probably add the `model_type` field to the XLM-ROBERTA config.
@julien-c - If it's fine for you I can add the `model_type` `xlm-roberta` to all `xlm-roberta` configs of the following models:
https://huggingface.co/models?search=xlm-rob
As @nooralahzadeh pointed out, the fine-tuned XLM-ROBERTA models would otherwise default to the XLM model (because of its name and the implemented Fallback pattern in https://github.com/huggingface/transformers/blob/f6a23d19116a62bd3c662d0aa381130b49abcff7/src/transformers/configuration_auto.py#L190)
Should be easy with a S3 AWS script. <|||||>Hmm, there is a `xlm-roberta` before `xlm` and `roberta` in [CONFIG_MAPPING](https://github.com/huggingface/transformers/blob/f6a23d19116a62bd3c662d0aa381130b49abcff7/src/transformers/configuration_auto.py#L65) so it should be correctly picked up, no?<|||||>True, @nooralahzadeh I guess in order to avoid falling to 'xlm' you have to name your files/folders 'xlm-roberta-something' (so you need to include the full xlm-roberta name) or you can just manually change the config of your xlm-roberta and add the `model_type`<|||||>Thanks, this is what I said in the first comment: need to have "xlm-roberta" in its name!<|||||>Hello thanks, this command not work! can you help me? please
python run_squad.py \
--model_type xlm-roberta
<|||||>Hey @Forutanrad,
could you please open a new issue for this?<|||||>Hello, yes thanks.
|
transformers | 3,497 | closed | REALM | # 🌟 New model addition
REALM is from some of the authors of BERT (I like to think of it as the next BERT :) ) that have found a way to incorporate world knowledge (from Wikipedia) into the model.
They do this by having the concept of a retriever module that retrieves information from wikipedia articles.
<!-- Important information -->
## Open source status
Code not released at the moment but will probably be released by Google soon I'd imagine.
https://arxiv.org/abs/2002.08909
| 03-28-2020 10:43:46 | 03-28-2020 10:43:46 | Code is released for the preceding paper: Latent Retrieval for Weakly Supervised Open Domain Question Answering" https://www.aclweb.org/anthology/P19-1612/
https://github.com/google-research/language/tree/master/language/orqa<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I am also interested on it. Any news? Thanks<|||||>I'm also interested on this, are you planning to include this model in the library??<|||||>Finally the REALM code has been released here https://github.com/google-research/language/tree/master/language/realm.
I think that this issue should be re-opened. @aced125 are you able to re-open it?<|||||>I hope somebody can reopen this and make it happen<|||||>Any updates on this ? <|||||>Any update?
<|||||>I think this could be implemented easily with RAG by making sure that we
both finetune doc encoder and question encoder of the retriever model. This
might be very useful for us.
<|||||>Any Update on this?<|||||>@OctoberChang I did extend the RAG in a way that can be used to experiment REALM stuff.
https://paperswithcode.com/paper/fine-tune-the-entire-rag-architecture<|||||>@shamanez , thanks for the pointer to your RAG project [(link)](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever)
Do you have any pre-trained (not fine-tuned on downstream tasks) models with ICT or Salient span masking [[link]](https://arxiv.org/pdf/2002.08909.pdf) that can be load from Huggingface Transformers?<|||||>I haven't extended into the ICT. Sorry.
@OctoberChang I think to get REALM models loaded with the current HF retrieval framework, we might need some workarounds. This is mainly because RAG uses a generative model and REALM consists of extractive LM. <|||||>@qqaatw's WIP PR (https://github.com/huggingface/transformers/pull/13292) adds REALM. |
transformers | 3,496 | closed | How to load BertforSequenceClassification models weights into BertforTokenClassification model? | Initially, I have a fine-tuned BERT base cased model using a text classification dataset and I have used BertforSequenceClassification class for this. Now I want to use this fine-tuned BERT model weights for Named Entity Recognition and I have to use BertforTokenClassification class for this. I'm unable to figure out how to load the fine-tuned BERT model weights into the new model created using BertforTokenClassification. @thomwolf
Thanks in advance....................... | 03-28-2020 05:10:31 | 03-28-2020 05:10:31 | Have you tried loading your checkpoint in the model using `from_pretrained`:
```py
model = BertForTokenClassification.from_pretrained("checkpoint")
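# the shared BERT encoder weights are loaded from the checkpoint; the token-classification head is newly initialized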
```
? It should work out of the box. Please be aware that the NER head will be randomly initialized as it is not in the sequence classification checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,495 | closed | CUDA error: CUBLAS_STATUS_ALLOC_FAILED When running language modeling using bert-base-cased | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) Language modeling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a shell file under example folder called run_lm.sh
2. bash run_lm.sh
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
export TRAIN_FILE=../../data/wikitext-2-raw/wiki.train.raw
export TEST_FILE=../../data/wikitext-2-raw/wiki.test.raw
python run_language_modeling.py \
--output_dir=output \
--model_type=bert \
--model_name_or_path=bert-base-cased \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It would arouse:
```
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
However, if you only change ```--model_name_or_path=bert-base-cased \``` to ```--model_name_or_path=bert-base-uncased \```, it will work well. Hence, I'm not sure what went wrong.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.5.1
- Platform: Linux-5.3.0-28-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 03-28-2020 04:54:58 | 03-28-2020 04:54:58 | I have encountered the same error running the LM finetuning script on Google Colab, also with the Bert model type but using the pretrained weights from 'allenai/scibert_scivocab_uncased'
```
!python run_language_modeling.py \
--output_dir=models \
--model_type=bert \
--model_name_or_path='allenai/scibert_scivocab_uncased' \
--do_train \
--train_data_file=train_sample.txt \
--do_eval \
--eval_data_file=val_sample.txt \
--mlm \
--line_by_line
```
Environment
- transformers : 2.7.0
- Python: 3.6.9
- PyTorch: 1.4.0
Abbreviated stack trace :
```
Iteration: 0% 0/2971 [00:00<?, ?it/s]/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
...
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [116,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "run_language_modeling.py", line 799, in <module>
main()
File "run_language_modeling.py", line 749, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 353, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 987, in forward
encoder_attention_mask=encoder_attention_mask,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 790, in forward
encoder_attention_mask=encoder_extended_attention_mask,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 407, in forward
hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 368, in forward
self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 314, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py", line 216, in forward
mixed_query_layer = self.query(hidden_states)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
`<|||||>I believe this is because the `allenai/scibert_scivocab_uncased` checkpoint does not have a maximum length for the tokenizer.
Would you mind trying again but this time specifying `--block_size=512` as an additional argument, to limit the size of sequences to 512 tokens?g<|||||>@Rshcaroline do you mind trying the same argument `--block_size=512` and see if that fixes your issue?<|||||>@LysandreJik Thanks very much for the tip, that solved the problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I think this is an OOM from the CUDA side.<|||||>In case somebody is having the same issue in the latest version, `--max_seq_length 512` fixes this issue for me. |
transformers | 3,494 | closed | TypeError when using Feature Extraction Pipeline with XLM roberta | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): XLM-R
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. after importing the required libraries, run the following snipped of code:
```
xlmr_model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base")
xlmr_tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
nlp = pipeline(task ="feature-extraction", model = xlmr_model, tokenizer=xlmr_tokenizer, framework="tf")
features = nlp('We are very happy to include pipeline into the transformers repository.')
print(features)
```
2. You should get the following error: result = self.forward(*input, **kwargs)
**TypeError: forward() got an unexpected keyword argument 'training'**
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect this code to run without this error.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6.1
- Platform: Ubuntu 16.04.6 LTS
- Python version: 3.6.1
- PyTorch version (GPU?): No GPU
- Tensorflow version (GPU?): No GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 03-28-2020 04:00:56 | 03-28-2020 04:00:56 | |
transformers | 3,493 | closed | Finetuning GPT-2 | It looks like there used to be a script `run_lm_finetuning.py` that has been replaced by `run_language_modeling.py`. It's unclear to me how to use this script to run finetuning. I want to finetune GPT-2 on a variety of downstream tasks, and would love some help! | 03-28-2020 01:39:12 | 03-28-2020 01:39:12 | You can use `run_language_modeling.py` for this purpose. Just set `--model_type` to `gpt2`, set `--model_name_or_path` to the gpt2 model checkpoint you want (`gpt2`) and set `--train_data_file` to your dataset and you should be ready to go.<|||||>Thanks! The issue is that I want to use the pre-trained version of GPT-2. I remember `run_lm_finetuning` had a few lines of code where it would download and load that pre-trained model.<|||||>You can specify `gpt2` in `--model_name_or_path`. That corresponds to one of the pre-trained checkpoints that it'll download and use. The other possible pre-trained models that you can specify there are `gpt2-medium`,`gpt2-large`,`gpt2-xl` and `distilgpt2`.<|||||>If specifying `gpt2` there downloads the checkpoints then how do you train from scratch? I've been specifying that parameter and it seems like it is training from scratch (starting perplexity ~1000).<|||||>I believe that if you want to train from scratch, you'll have to point that to a folder with a config file (with the parameters of the model) and no `pytorch_model.bin` checkpoint file in that folder.<|||||>@Genius1237 Hi there,
I am finetuning the 124M model based on my dataset (almost 2mb) and I am using Colab by Max Wolf. I am wondering if there is a way that I can generate texts based on my trained model + internet context (not only metadata). I wanna generate some notes regarding current issues (such as COVID-19) based on my trained model.
Could you help me with that, please? thanks.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|