repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 11,725 | closed | Fix #11724 | np.sin/np.cos is an in-place op, so position_enc will be incorrect | 05-14-2021 10:37:04 | 05-14-2021 10:37:04 | |
transformers | 11,724 | closed | A bug in modeling_tf_marian.py and modeling_tf_pegasus.py SinusoidalPositionalEmbedding _init_weight |
## Information
We should create a new np.array and then store the np.sin and np.cos results in it:
```python
table = np.zeros_like(position_enc)
# index 0 is all zero
table[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2])
table[:, dim // 2 :] = np.cos(position_enc[:, 1::2])
# convert to tensor
table = tf.convert_to_tensor(table)
tf.stop_gradient(table)
```
https://github.com/huggingface/transformers/blob/bd3b599c12cfcf5ef517c5ffe526afbdbaa92539/src/transformers/models/marian/modeling_tf_marian.py#L147-L157
https://github.com/huggingface/transformers/blob/8d43c71a1ca3ad322cc45008eb66a5611f1e017e/src/transformers/models/pegasus/modeling_tf_pegasus.py#L148-L158
| 05-14-2021 10:27:47 | 05-14-2021 10:27:47 | Hey @JunnYu,
Could you give a bit more context on what the issue is exactly? And how your solution solves the issue?<|||||>@patrickvonplaten
After the line `position_enc[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2])`,
the values in `position_enc[:, 0 : dim // 2]` are overwritten in place.
When we then compute `np.cos(position_enc[:, 1::2])`, the result is inconsistent with the expected result, because the slice `position_enc[:, 1::2]` partly overlaps the region that was just overwritten.
So we should initialize a new np.array `table = np.zeros_like(position_enc)` to store the sinusoidal position embeddings.
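A minimal sketch (not from the original report) that illustrates the overwrite on a toy array -- `n_pos` and `dim` here are arbitrary small values:
```python
import numpy as np

n_pos, dim = 4, 6
position_enc = np.array(
    [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
)

# buggy, in-place variant: the second assignment reads columns the first one already overwrote
buggy = position_enc.copy()
buggy[:, 0 : dim // 2] = np.sin(buggy[:, 0::2])
buggy[:, dim // 2 :] = np.cos(buggy[:, 1::2])

# fixed variant: write sin/cos into a fresh array instead
table = np.zeros_like(position_enc)
table[:, 0 : dim // 2] = np.sin(position_enc[:, 0::2])
table[:, dim // 2 :] = np.cos(position_enc[:, 1::2])

print(np.allclose(buggy, table))  # False -> the in-place version produces different values
```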


<|||||>True, good catch! Do you mind opening a PR to fix it? We should then also run the slow tests to be sure the model performance is not affected<|||||>@patrickvonplaten I have opened a PR https://github.com/huggingface/transformers/pull/11897 to fix this.
I think the **pretrained** tf model's performance will **not be affected**. But when we initialize a **new** tf model, the model's performance will **be affected**!
Because the pretrained `tf_model.h5` contains the correct `embedding weight` (this embedding weight is converted from `pytorch_model.bin`). When we load a pretrained tf model, the tf model will load this correct `embedding weight`.
```python
# old code
from transformers.models.marian import MarianModel, TFMarianModel
import torch
pt_model = MarianModel.from_pretrained(
"Helsinki-NLP/opus-mt-en-de")
tf_model = TFMarianModel.from_pretrained(
"Helsinki-NLP/opus-mt-en-de")
pt_emb_weight = pt_model.encoder.embed_positions.weight
tf_emb_weight = torch.from_numpy(
tf_model.model.encoder.embed_positions.weight.numpy())
print(pt_emb_weight.equal(tf_emb_weight))
# True
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten I have opened a PR #11897. Can you take the time to look at this PR? Thanks.<|||||>Thanks for pinging me again & super sorry to have kept this waiting for so long! |
transformers | 11,723 | closed | Warnings about some weights that were not initialized in Greek BERT | Hello,
in order to use the Greek BERT model, I use `AutoModel` class, and specifically
`greek_bert = AutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")`
When I load the model with the above command I get this warning:
```
Some weights of the model checkpoint at nlpaueb/bert-base-greek-uncased-v1 were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
Why is this warning thrown? | 05-14-2021 09:43:21 | 05-14-2021 09:43:21 | It tells you that you are initializing a `BertModel` without the heads that were used for pre-training (namely next sentence prediction and masked language modeling). That's not a problem, as you probably don't need these weights for a downstream task of interest (such as question answering or sequence classification). <|||||>Thank you @NielsRogge for your quick reply,
Indeed, I don't need those weights for my task, but how can you tell that these are the weights of the heads used in pre-training?<|||||>You can see it based on their names:
`['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.bias', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.weight']`
=> `cls.seq_relationship` refers to the linear layer used for next sentence prediction.
=> `cls.predictions` refers to the masked language modeling head. It consists of a `transform` layer followed by a `decoder` (which maps to the vocabulary).
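A quick way to see this for yourself (an added sketch, not part of the original reply) is to load the checkpoint with the pre-training heads attached and list the head parameters:
```python
from transformers import BertForPreTraining

# loading with the pre-training heads keeps the cls.* weights from the checkpoint
model = BertForPreTraining.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
head_params = [name for name, _ in model.named_parameters() if name.startswith("cls.")]
print(head_params)  # e.g. cls.seq_relationship.* and cls.predictions.*
```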
The author of Greek BERT probably used the `BertForPreTraining` model for pre-training the BERT model. You can see the definition of the heads [here](https://github.com/huggingface/transformers/blob/bd3b599c12cfcf5ef517c5ffe526afbdbaa92539/src/transformers/models/bert/modeling_bert.py#L1011).<|||||>I see. I saw the names too, but I wasn't sure that these weights correspond to these heads just by reading the names.
Thank you @NielsRogge, you have been very helpful.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,722 | closed | Plug a custom tokenizer into PreTrainedTokenizer | Is there a clean way to use a custom tokenizer trained from the tokenizers library in a PretrainedTokenizer interface?
```
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file(config_file)
```
The config file is generated using tokenizer.save() method.
I want to use this tokenizer in a PretrainedTokenizer/PretrainedTokenizerFast class.
The closest thing I found in the existing issues is [this](https://github.com/huggingface/tokenizers/issues/259#issuecomment-625905930)
When I tried the above solution using a PretrainedTokenizer class,
```
class CustomTokenizer(PreTrainedTokenizer):
    def __init__(
        self,
        vocab_file=vocab_file,
        merges_file=merges_file,
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        cls_token="<s>",
        unk_token="<unk>",
        pad_token="<pad>",
        mask_token="<mask>",
        **kwargs
    ):
        super().__init__(
            tokenizer,
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            sep_token=sep_token,
            cls_token=cls_token,
            pad_token=pad_token,
            mask_token=mask_token,
            **kwargs,
        )
```
I got an exception
```
File /path/custom_tokenizer.py", line 37, in __init__
super().__init__(
TypeError: __init__() takes 1 positional argument but 2 were given
```
Is there a solution/workaround? | 05-14-2021 08:55:30 | 05-14-2021 08:55:30 | Would this [documentation](https://huggingface.co/transformers/fast_tokenizers.html) help you out?<|||||>This is exactly what I was looking for! No idea how I missed it. Thank you :) |
transformers | 11,721 | closed | ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384) | the command to reproduce:
cd huggingface-transformers/examples/pytorch/question-answering
python -m torch.distributed.launch --nproc_per_node=8 ./run_qa.py \
--model_name_or_path roberta-large \
--dataset_name squad \
--do_train --do_eval \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 256 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir test_result2/$trials --overwrite_output_dir \
--logging_dir test_result2/$trials/tensorboard --logging_first_step --logging_steps 50 \
--fp16
I tried adding "--max_eval_samples 10240", which fixes the error, but the evaluation result is quite low (exact_match = 4.9414, f1 = 8.9784), whereas when I run with 1 GPU, the above command succeeds (exact_match = 88.5336, f1 = 94.3266).
the full error is "File "./transformers/src/transformers/trainer_pt_utils.py", line 410, in _nested_set_tensors
i * slice_len : (i + 1) * slice_len
i * slice_len : (i + 1) * slice_len
ValueError: could not broadcast input array from shape (2816,384) into shape (2698,384)" | 05-14-2021 07:21:59 | 05-14-2021 07:21:59 | @sgugger i saw you have a PR to fix similar error, could you help to take a look?
<|||||>You should upgrade your version of Transformers: this code is not used anymore inside the `Trainer`.<|||||>I am installing transformers from source, and the issue is still there.
and issue throw from https://github.com/huggingface/transformers/blob/86d5fb0b360e68de46d40265e7c707fe68c8015b/src/transformers/trainer_pt_utils.py#L411, looks like the code is still used in master?
<|||||>Indeed, the subclass of the `Trainer` for QA still uses the old code! The PR linked above should fix this.
Thanks for flagging! |
transformers | 11,720 | closed | RagRetriever fails to find faiss-gpu installed with pip not conda | - `transformers` version: 4.5.1
- Platform: Linux-4.14.231-173.361.amzn2.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes nVidia V100 16Gb
- Using distributed or parallel set-up in script?: just single task in //
- rag: @patrickvonplaten, @lhoestq
## Information
Model I am using (RAG Retriever ...):
The problem arises when using:
[* ] the official example scripts: (worked!)
[* ] my own modified scripts: (give details below)
The tasks I am working on is:
[* ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run transformers/examples/research_projects/rag/use_own_knowledge_dataset.py
This step worked fine yesterday prior to reboot.
2. Try to inspect output dataset directly using RagRetriever model in python... 3.6 >
```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer
retriever = RagRetriever.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base', cache_dir=cache_dir, index_name="custom", indexed_dataset='./rag/out')
ImportError:
RagRetriever requires the faiss library but it was not found in your environment. Checkout the instructions on the
installation page of its repo: https://github.com/facebookresearch/faiss/blob/master/INSTALL.md and follow the ones
that match your environment.
```
Also, if you import faiss, then faiss.__version__ does not exist.
Note for our environment we have to pip install faiss-gpu rather than conda since conda repos are blocked at proxy.
qds/NLP/aws_nlp/rag/out
A sample script to query the /path/to/my_knowledge_dataset/ would be handy. | 05-14-2021 01:41:32 | 05-14-2021 01:41:32 | Got it to work with rebuild... and pip install faiss and faiss-gpu
git clone https://...github.../rag
export TOKENIZERS_PARALLELISM=false
pip install torch torchvision ray[default] datasets faiss faiss-gpu matplotlib seaborn pandas transformers awscli s3fs scikit-plot
python use_own_knowledge_dataset.py --csv_path ./text.csv --output_dir ./out/text
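For the record, a minimal sketch of querying the index produced above -- the paths below are assumptions based on the `--output_dir` used here, so adjust them to your run:
```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

# assumed paths: use_own_knowledge_dataset.py writes the dataset and the faiss index under output_dir
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq",
    index_name="custom",
    passages_path="./out/text/my_knowledge_dataset",
    index_path="./out/text/my_knowledge_dataset_hnsw_index.faiss",
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

input_dict = tokenizer.prepare_seq2seq_batch("what does the dataset say about X?", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```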
|
transformers | 11,719 | closed | parameter `ignore_keys` of `trainer.predict` not accessible in `Trainer` or `TrainingArguments` | # 🚀 Feature request
The [`predict`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.predict) and [`evaluate`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.evaluate) methods of the Trainer class provide an excellent option of `ignore_keys`. Here is a small example:
```python
trainer.predict(dataset, ignore_keys=["ner_loss", "cls_loss", "ner_logits", "cls_logits"])
```
@sgugger
This option is, however, not accessible during the normal setup when defining the `TrainingArguments` or the `Trainer` class, so a call to `trainer.train()` leads to errors during the mid-training evaluation.
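As a stopgap (an added sketch, not part of the original request -- the `ignore_keys` list simply mirrors the example above), one can subclass `Trainer` so that the mid-training evaluation always drops these keys:
```python
from transformers import Trainer

class FilteringTrainer(Trainer):
    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # always drop the extra model outputs during the in-training evaluation
        ignore_keys = ignore_keys or ["ner_loss", "cls_loss", "ner_logits", "cls_logits"]
        return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
```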
## Motivation
I am unable to evaluate the model metrics on the validation set **during** the training to see if it makes sense to continue.
## Your contribution
I am happy to make a PR if this is seen as a genuine problem. As always, maybe I am missing something.
| 05-14-2021 00:42:46 | 05-14-2021 00:42:46 | We could add an argument for this (like `ignore_keys_for_eval`) yes. Let me know if you want to tackle this!<|||||>I very much wanna do this and will get right down to it! 👍 Admittedly though, I am relatively new to making contributions.<|||||>Hi! As a short update: I will not be able to work on this until 23rd of June... so if anyone wants to pick it up good otherwise I need 3 more weeks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>OK so I'd like to work on this :)<|||||>Sure! Ping me when you open a PR :-) |
transformers | 11,718 | closed | Fix loading the best model on the last stage of training | # What does this PR do?
It fixes the best model loading on the last stage of the training.
Fixes #11666
## Who can review?
@sgugger
| 05-13-2021 18:49:13 | 05-13-2021 18:49:13 | Great! Last thing is to run `make style` on your branch to make the CI pass. Let me know if you run into any issue, I can also push on your branch.<|||||>> Great! Last thing is to run `make style` on your branch to make the CI pass. Let me know if you run into any issue, I can also push on your branch.
CI is already fixed<|||||>Thanks again for the fix! |
transformers | 11,717 | closed | Fix T5 beam search when using parallelize | # What does this PR do?
As requested by @patrickvonplaten in conversation on issue https://github.com/huggingface/transformers/issues/9200, this fixes a crash when trying to use beam search on T5 models split across multiple GPUs using `model.parallelize()`. It uses the fix from https://github.com/huggingface/transformers/pull/9219, applied to the T5-specific code (also related is https://github.com/huggingface/transformers/pull/9596 which refactored the `_reorder_cache` functions).
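For context, the change boils down to moving `beam_idx` onto the device of each cached state before the `index_select` in `_reorder_cache` -- roughly the following (see the actual diff for the exact code):
```python
# sketch of the fix: beam_idx may live on a different GPU than this layer's cached states
reordered_layer_past_states = reordered_layer_past_states + (
    layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),
)
```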
I tested the fix on a t5-small model. Before:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/unifiedqa-t5-small")
device_map = {0: range(0,3), 1: range(3, 6)}
input_string = "What was the color of the sky?\\nIt was a dark stormy night."
input_ids = tokenizer.encode(input_string,return_tensors="pt").to("cuda:0")
output = model.generate(input_ids, num_beams=2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/generation_utils.py", line 1044, in generate
return self.beam_search(
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/generation_utils.py", line 1788, in beam_search
model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
File "/home/oyvindt/miniconda3/envs/transformers4/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1635, in _reorder_cache
layer_past_state.index_select(0, beam_idx),
RuntimeError: Input, output and indices must be on the current device
```
After:
```
...
output = model.generate(input_ids, num_beams=2)
tokenizer.batch_decode(output, skip_special_tokens=True)
--> ['dark stormy']
```
As far as I know this small fix shouldn't have any adverse effects. As to why the tests added in https://github.com/huggingface/transformers/pull/9219 didn't catch this, possibly that's because they're not generally run in multi-GPU setups?
| 05-13-2021 18:05:54 | 05-13-2021 18:05:54 | @OyvindTafjord Hi, I am trying to figure out how to use model parallelization on T5 but having some problems. I tried to reproduce your result but got the following error:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/unifiedqa-t5-small")
device_map = {0: range(0,3), 1: range(3, 6)}
model.parallelize(device_map)
input_string = "What was the color of the sky?\\nIt was a dark stormy night."
input_ids = tokenizer.encode(input_string,return_tensors="pt").to("cuda:0")
output = model.generate(input_ids, num_beams=2)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/generation_utils.py", line 922, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/generation_utils.py", line 417, in _prepare_encoder_decoder_kwargs_for_generation
model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/models/t5/modeling_t5.py", line 897, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: arguments are located on different GPUs at /opt/conda/conda-bld/pytorch_1587428091666/work/aten/src/THC/generic/THCTensorIndex.cu:403
```
* My current environment:
transformers: 4.7.0.dev0
torch: 1.5.0
Could you please help me to figure out the problem and give me some direction that I should start with? I don't have much experience with model parallelization, do I need to modify the `input_ids`?
Thanks in advance.
<|||||>@bing0037 Hm, I tested with 4.7.0 now and the above code works for me. I noticed my initial set of commands was missing the critical `model.parallelize(device_map)` step, but looks like you made sure to include that?
You could double check that `model.encoder.first_device` returns the expected `'cuda:0'`, and then the code at https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/modeling_t5.py#L870 should make sure the embeddings are also on that same device, so you shouldn't get that line 897 error above.<|||||>@OyvindTafjord Thank you for your reply. The problem was the inconsistency of my command and the above command works well.
BTW, the above command is for parallelized model **inference**, could you please give me some suggestions for parallelized model **training**?
Currently, I am trying to finetune **t5-large** model using `run_summarization.py` on multiple GPUs by using model parallelization.
* My test 1: By adding ```model.parallieze()``` directly in `run_summarization.py`, but got the following error:
```
model = AutoModelForSeq2SeqLM.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
use_auth_token=True if model_args.use_auth_token else None,
)
+ device_map = {0: [0, 1, 2],
+ 1: [3, 4, 5, 6, 7, 8, 9],
+ 3: [10, 11, 12, 13, 14, 15, 16],
+ 4: [17, 18, 19, 20, 21, 22, 23]}
+ model.parallelize(device_map) # Splits the model across several devices
model.resize_token_embeddings(len(tokenizer))
if model.config.decoder_start_token_id is None:
raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined")
```
```
Traceback (most recent call last):
File "run_summarization.py", line 616, in <module>
main()
File "run_summarization.py", line 540, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/transformers/trainer.py", line 1300, in train
args.max_grad_norm,
File "/home/guest/anaconda3/envs/huggingface_latest/lib/python3.6/site-packages/torch/nn/utils/clip_grad.py", line 30, in clip_grad_norm_
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)
RuntimeError: All input tensors must be on the same device. Received cuda:0 and cuda:7
```
* My test 2: I referred to this question: https://discuss.huggingface.co/t/transformers-4-6-0-on-sagemaker/6217/4, but still can't change the model parallelism status:
```
pip install git+https://github.com/aws/sagemaker-python-sdk.git
pip install sagemaker
```
```
>>> from transformers.file_utils import is_sagemaker_mp_enabled
>>> is_sagemaker_mp_enabled()
False
```
Could you give me some resources that I could refer to? Thank you!<|||||>@bing0037 I haven't tried the parallelize functionality in the context of training, so I'm not of much help on that. |
transformers | 11,716 | closed | Refactor slow sentencepiece tokenizers. | PR for #11646
## ToDo
- [x] `AlbertTokenizer`
- [x] `BarthezTokenizer`
- [x] `BertGenerationTokenizer`
- [x] `BigBirdTokenizer`
- [x] `CamembertTokenizer`
- [x] `DebertaV2Tokenizer`
- [x] `M2M100Tokenizer`
- [x] `MarianTokenizer`
- [x] `MBart50Tokenizer`
- [x] `PegasusTokenizer`
- [x] `ReformerTokenizer`
- [x] `Speech2TextTokenizer`
- [x] `T5Tokenizer`
- [x] `XLMProphetNetTokenizer`
- [x] `XLM RoBERTa`
- [x] `XLNetTokenizer` | 05-13-2021 13:25:45 | 05-13-2021 13:25:45 | `SentencePieceProcessor.decode` is doing "the same but more than `SentencePieceProcessor.decode_pieces`.
That is why we replace `SentencePieceProcessor.decode_pieces` with `SentencePieceProcessor.decode` in this PR.
See here:
https://github.com/google/sentencepiece/blob/6256ef243844e5848499cf519eb2a7e2755e75a1/python/src/sentencepiece/__init__.py#L307<|||||>rebased on upstrem/master<|||||>We need to rebase on master after PR #11737 has been merged.<|||||>Rebased on master - CI is green again. :-) <|||||>Rebased on master to get integration tests - see #11737<|||||>Rebased on master<|||||>> I think generally speaking we'd like to have methods that are common to all tokenizers in the base class - but not methods that are common to some of them only. I'd also like to keep the number of abstraction layers to a minimum, tokenizers are already quite tough to understand.
@LysandreJik
Yes. I also prefer a low number of abstraction layers. At the same time I like dry code. There is 100% duplicate code in the tokenizers impl. that has just been duplicated by copy & paste. IMO that should be removed by an refactoring. That is what I try to introduce here.<|||||>The general approach of the library is to keep the number of abstractions as low as possible, and to keep implementations as separate as possible from each other, hence the high amount of copy-pasted code.
We want users to be able to experiment with single models/tokenizers without their changes impacting other models or tokenizers - and we want them to be able to understand how a model or tokenizer behaves by simply checking a single file, rather than having to hop around multiple files.
We are failing at this with tokenizers as there are already two levels of abstraction, but adding a third one isn't really the direction we want to head to :)
Does that make sense?<|||||>> Does that make sense?
Yes. Sure. Your project, your call.
I will revert my changes and keep it as simple as possible as discussed in the beginning.
<|||||>@LysandreJik I have redone the PR. Everything is green and the changes are as simple as planned in the issue.
This is ready for review.
Everything is tested by setting `test_sentencepiece = True` in the tokenizer test classes and by the following
testfunction: `TokenizerTesterMixin.test_sentencepiece_tokenize_and_convert_tokens_to_string` |
transformers | 11,715 | closed | Request for feature for setting batch size in pipeline when inference | ```
from transformers import pipeline
from transformers import AutoModelWithLMHead, AutoTokenizer
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
nlp = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer, device=0)
nlp(ds.df_train.sample(32)['content'].tolist(), max_length=300)
```
I am using a pipeline instance for inference on chunks of sentences.
When the chunk size is small, like the 32 above, it fits into GPU memory without problems.
However, when I increase the size of this input, a memory error comes out:
> RuntimeError: CUDA out of memory.
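A manual chunking loop along these lines sidesteps the OOM for now (sketch only -- the chunk size of 8 is an arbitrary assumption, not a tuned value):
```python
def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i : i + size]

sentences = ds.df_train['content'].tolist()
results = []
for batch in chunks(sentences, 8):
    # each call only sends a small batch of sentences to the GPU
    results.extend(nlp(batch, max_length=300))
```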
Is there any way to set the batch size inside `nlp()` so that it automatically fits into the GPU and runs the inference in chunks? | 05-13-2021 10:48:39 | 05-13-2021 10:48:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,714 | closed | Blender 9B model | Are there any plans to release the blender 9B model in the transformers library? | 05-13-2021 08:39:31 | 05-13-2021 08:39:31 | I think @patil-suraj talked about it at some point?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>
I ported myself :)
|
transformers | 11,713 | closed | Unable to import transformers: ImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.3 | I'd installed transformers via `pip install transformers`
And made sure my Python packages are up-to-date. However, when I import transformers it shows the error message:
```
ImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.3.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
```
I'd made sure my numpy is version 1.20.1, and I'd tried as suggested in the message: `pip install transformers -U`
But it doesn't work. Please help me how could I import the package, thank you! | 05-13-2021 08:14:25 | 05-13-2021 08:14:25 | What is your working environment? Is it a Colab notebook, a Linux machine?<|||||>I'm working on my PC with Windows 10 system.
> What is your working environment? Is it a Colab notebook, a Linux machine?
<|||||>Probably your environments have different versions as @LysandreJik mentioned. Can you run the following command **where you get the error message (your working environment)**, and assure you have the correct numpy version.
pip freeze | findstr "numpy"
You may try the following, but without locating your correct environment these probably do not help much.
pip install -I transformers --no-cache-dir --force-reinstall<|||||>> pip freeze | findstr "numpy"
Hello! I'd followed your work, and it returns this:

Why does this happen?<|||||>> Probably your environments have different versions as @LysandreJik mentioned. Can you run the following command **where you get the error message (your working environment)**, and assure you have the correct numpy version.
>
> ```
> pip freeze | findstr "numpy"
> ```
>
> You may try the following, but without locating your correct environment these probably do not help much.
>
> ```
> pip install -I transformers --no-cache-dir --force-reinstall
> ```
Your second instruction works! It seems force-install may install all dependencies for transformer, but I still don't know why it couldn't run with the default numpy = 1.20.0
Thank you for your help!<|||||>Glad it solved your problem. We can close this issue then @laurence-lin.<|||||>> Glad it solved your problem. We can close this issue then @laurence-lin.
OK, thank you!<|||||>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
daal4py 2021.5.0 requires daal==2021.4.0, which is not installed.
conda-repo-cli 1.0.4 requires pathlib, which is not installed.
anaconda-project 0.10.2 requires ruamel-yaml, which is not installed.
mxnet 1.7.0.post2 requires numpy<1.17.0,>=1.8.2, but you have numpy 1.22.4 which is incompatible.
mxnet 1.7.0.post2 requires requests<2.19.0,>=2.18.4, but you have requests 2.28.0 which is incompatible.
numba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.22.4 which is incompatible.
jupyter-server 1.13.5 requires pywinpty<2; os_name == "nt", but you have pywinpty 2.0.2 which is incompatible.
d2l 0.17.5 requires matplotlib==3.5.1, but you have matplotlib 3.5.2 which is incompatible.
d2l 0.17.5 requires numpy==1.21.5, but you have numpy 1.22.4 which is incompatible.
d2l 0.17.5 requires requests==2.25.1, but you have requests 2.28.0 which is incompatible.
pls help me out<|||||>I have the same issue but with `fastai`. Check out my [Git Issue](https://github.com/fastai/fastai/issues/3708).<|||||>> ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. daal4py 2021.5.0 requires daal==2021.4.0, which is not installed. conda-repo-cli 1.0.4 requires pathlib, which is not installed. anaconda-project 0.10.2 requires ruamel-yaml, which is not installed. mxnet 1.7.0.post2 requires numpy<1.17.0,>=1.8.2, but you have numpy 1.22.4 which is incompatible. mxnet 1.7.0.post2 requires requests<2.19.0,>=2.18.4, but you have requests 2.28.0 which is incompatible. numba 0.55.1 requires numpy<1.22,>=1.18, but you have numpy 1.22.4 which is incompatible. jupyter-server 1.13.5 requires pywinpty<2; os_name == "nt", but you have pywinpty 2.0.2 which is incompatible. d2l 0.17.5 requires matplotlib==3.5.1, but you have matplotlib 3.5.2 which is incompatible. d2l 0.17.5 requires numpy==1.21.5, but you have numpy 1.22.4 which is incompatible. d2l 0.17.5 requires requests==2.25.1, but you have requests 2.28.0 which is incompatible.
>
> pls help me out
I solved it this way:
`pip install -I transformers --no-cache-dir --force-reinstall`, as suggested by @devrimcavusoglu
The this error appeared:
`ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in [.../.../...']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.`
I did `sudo pip3 uninstall numpy` twice, until no numpy version was found, and then it worked. Tbh I have no idea why, but as long as it's working it is fine.
Hope this helps<|||||>find the old version numpy , and delete the old version numpy by youself。<|||||>@devrimcavusoglu I'am facing with the same issue, but the following command does not help: `pip install -I transformers --no-cache-dir --force-reinstall`.
I am on ubuntu, using a conda env named `py36`, and make sure I was operating in the correct env (as the `(py36)` line in the following logs).
```
(py36)
cyx@c9 ~
% pip install -I transformers --no-cache-dir --force-reinstall
xxxxxxx downloading logs xxxxxxxxxxx
Installing collected packages: zipp, typing-extensions, urllib3, pyparsing, importlib-resources, importlib-metadata, idna, charset-normalizer, certifi, tqdm, six, requests, regex, pyyaml, packaging, joblib, filelock, click, tokenizers, sacremoses, numpy, hug gingface-hub, dataclasses, transformers
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
allennlp 1.0.0 requires jsonnet>=0.10.0; sys_platform != "win32", which is not installed. allennlp 1.0.0 requires jsonpickle, which is not installed.
allennlp 1.0.0 requires tensorboardX>=1.2, which is not installed. bminf test requires cupy-cuda9<10,>=9, which is not installed.
torchvision 0.10.0 requires torch==1.9.0, but you have torch 1.10.0 which is incompatible. thinc 8.0.15 requires typing-extensions<4.0.0.0,>=3.7.4.1; python_version < "3.8", but you have typing-extensions 4.1.1 which is incompatible.
sphinx-rtd-theme 0.5.2 requires docutils<0.17, but you have docutils 0.17.1 which is incompatible. spacy 3.2.3 requires typing-extensions<4.0.0.0,>=3.7.4; python_version < "3.8", but you have typing-extensions 4.1.1 which is incompatible.
paddlepaddle-tiny 1.6.1 requires numpy<=1.16.4,>=1.12, but you have numpy 1.19.5 which is incompatible.
flake8 5.0.4 requires importlib-metadata<4.3,>=1.1.0; python_version < "3.8", but you have importlib-metadata 4.8.3 which is incompatible.
datasets 1.2.0 requires tqdm<4.50.0,>=4.27, but you have tqdm 4.64.1 which is incompatible.
argcomplete 1.11.1 requires importlib-metadata<2,>=0.23; python_version == "3.6", but you have importlib-metadata 4.8.3 which is incompatible.
allennlp 1.0.0 requires filelock<3.1,>=3.0, but you have filelock 3.4.1 which is incompatible.
allennlp 1.0.0 requires overrides==3.0.0, but you have overrides 6.1.0 which is incompatible.
allennlp 1.0.0 requires spacy<2.3,>=2.1.0, but you have spacy 3.2.3 which is incompatible.
allennlp 1.0.0 requires torch<1.6.0,>=1.5.0, but you have torch 1.10.0 which is incompatible.
allennlp 1.0.0 requires transformers<2.12,>=2.9, but you have transformers 4.18.0 which is incompatible.
opennmt-py 1.0.0 requires tqdm~=4.30.0, but you have tqdm 4.64.1 which is incompatible.
Successfully installed certifi-2022.12.7 charset-normalizer-2.0.12 click-8.0.4 dataclasses-0.8 filelock-3.4.1 huggingface-hub-0.4.0 idna-3.4 importlib-metadata-4.8.3 importlib-resources-5.4.0 joblib-1.1.1 numpy-1.19.5 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.10.31 requests-2.27.1 sacremoses-0.0.53 six-1.16.0 tokenizers-0.12.1 tqdm-4.64.1 transformers-4.18.0 typing-extensions-4.1.1 urllib3-1.26.14 zipp-3.6.0
```
Then, I try to import transformers, and get the same error.
```
(py36)
cyx@c9 ~
% python !10078
Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.__version__
'1.19.5'
>>> np.__file__ '/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/numpy/__init__.py'
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/__init__.py", line 30, in <module>
from . import dependency_versions_check
File "/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py", line 120, in require_version_core
return require_version(requirement, hint)
File "/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py", line 114, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/home/cyx/.conda/envs/py36/lib/python3.6/site-packages/transformers/utils/versions.py", line 50, in _compare_versions
f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}"
ImportError: numpy>=1.17 is required for a normal functioning of this module, but found numpy==1.16.4.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
>>>
```<|||||>
I think I found the answer.
While `pip list | grep numpy ` only returns 1.19.5 version, `conda list | grep numpy ` returns multiple versions:
```
(py36)
cyx@c9 ~
% conda list | grep numpy
numpy 1.19.2 py36h54aff64_0
numpy 1.16.4 <pip>
numpy 1.19.5 <pip>
numpy-base 1.19.2 py36hfa32c7d_0
```
Then, I went to the conda env dir: `/data/home/cyx/.conda/envs/py36/lib/python3.6/site-packages`, and find there is a folder named `numpy-1.16.4.dist-info` and a folder named `numpy-1.19.5.dist-info`. After removing the 1.16.4 folder, I can import transformers correctly.
I wonder maybe the version checking function could be updated?<|||||>I had the same issue and ran:
pip show numpy | grep Location
rm -rvf /usr/local/lib/python3.11/site-packages/numpy
python3.11 -m pip install numpy
and this resolved it |
transformers | 11,712 | closed | Reformer for questions answering(squad) | I want to use Reformer for Questions Answering. I tried to use pretrained model 'google/reformer-crime-and-punishment'. I was using this example https://huggingface.co/transformers/custom_datasets.html#qa-squad just replaced with reformer. I get the exception related with pad, cls tokens, sequence length and so on, but that does not matter now, I first want to know: is it even possible to get good results(model to answer questions more than 50 percent accuracy for example? Because I see that you working on this model now and some functions maybe are not implemented yet or so. I saw issues like this: https://github.com/huggingface/transformers/issues/5436 and you wrote that you be very surprised if it will give any good results using reformer for q&a. (but it was year ago, so maybe it changed)
Thanks | 05-13-2021 07:57:12 | 05-13-2021 07:57:12 | It hasn't changed yet. The problem is not code, the problem is the lack of a decent pre-trained model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,711 | closed | How to accelerate the inference speed when using pipeline | > transformers version: 4.6.0.dev0
> torch version:1.8.1+cu102
I am using the simple API - pipeline, to do inference, where the input are tens of thousands of sentences.
```
nlp = pipeline("text-generation", model= 'gpt2', device=0, return_full_text=False)
results = nlp(df_train['content'].tolist(), max_length=250, do_sample=True, top_p=0.9, top_k=0, \
repetition_penalty=1, num_return_sequences=64)
```
Take generation, for instance: I want to generate new synthesized samples from each sentence in df_train.
The code works well, but it is not fast enough; the GPU usage is only 76% ~ 85%.
Is there any trick or parameter I can tune to speed this up?
Another question: how can I eliminate this info message:
> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
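From a quick look at `generate()`, the message seems to appear when no `pad_token_id` is set, so passing one explicitly should silence it (untested sketch):
```python
# passing an explicit pad_token_id to the call forwards it to generate()
results = nlp(df_train['content'].tolist(), max_length=250, do_sample=True, top_p=0.9,
              pad_token_id=nlp.tokenizer.eos_token_id)
```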
| 05-13-2021 07:51:16 | 05-13-2021 07:51:16 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @yananchen1989 did you get a solution yet |
transformers | 11,710 | closed | AssertionError: internal model should be a reference to self.model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7 CPU
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
When I run `trainer.train()` for the second time in Jupyter, it throws this error:
```
AssertionError Traceback (most recent call last)
<ipython-input-7-3435b262f1ae> in <module>
----> 1 trainer.train()
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
933
934 self.control = self.callback_handler.on_epoch_end(self.args, self.state, self.control)
--> 935 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
936
937 if self.args.tpu_metrics_debug or self.args.debug:
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch)
1006
1007 if self.control.should_save:
-> 1008 self._save_checkpoint(model, trial, metrics=metrics)
1009 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
1010
~/data/apps/anaconda3/lib/python3.7/site-packages/transformers/trainer.py in _save_checkpoint(self, model, trial, metrics)
1012 # In all cases, including ddp/dp/deepspeed, self.model is always a reference to the model we
1013 # want to save.
-> 1014 assert _model_unwrap(model) is self.model, "internal model should be a reference to self.model"
1015
1016 # Save model checkpoint
AssertionError: internal model should be a reference to self.model
```
https://huggingface.co/transformers/training.html
The tasks I am working on is:
* [ ] sequence classification
* [ ] my own task
## To reproduce
Steps to reproduce the behavior:
1. run `trainer.train()`
2. run `trainer.train()` again.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 05-13-2021 07:49:34 | 05-13-2021 07:49:34 | You should update your version of Transformers to solve this issue.<|||||>thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,709 | closed | Fix gpt-2 warnings | closes https://github.com/huggingface/transformers/issues/11707 | 05-13-2021 07:29:23 | 05-13-2021 07:29:23 | |
transformers | 11,708 | closed | stop at load tokenizer_config.json when run barthez for mrpc | run code
```shell
python examples/text-classification/run_glue_tune.py --model_name_or_path /home2/zhenggo1/checkpoint/barthez_mrpc --task_name $TASK_NAME --do_eval --tune --max_seq_length 512 --output_dir /home2/zhenggo1/checkpoint/barthez_mrpc --tuned_checkpoint="/home2/zhenggo1/checkpoint/barthez_mrpc"
```
stop at here
```shell
[INFO|tokenization_utils_base.py:1618] 2021-05-13 10:11:25,859 >> Model name '/home2/zhenggo1/checkpoint/barthez_mrpc' not found in model shortcut name list (moussaKam/mbarthez, moussaKam/barthez, moussaKam/barthez-orangesum-title). Assuming '/home2/zhenggo1/checkpoint/barthez_mrpc' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1651] 2021-05-13 10:11:25,860 >> Didn't find file /home2/zhenggo1/checkpoint/barthez_mrpc/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1651] 2021-05-13 10:11:25,860 >> Didn't find file /home2/zhenggo1/checkpoint/barthez_mrpc/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/sentencepiece.bpe.model
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file None
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file None
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/special_tokens_map.json
[INFO|tokenization_utils_base.py:1714] 2021-05-13 10:11:25,860 >> loading file /home2/zhenggo1/checkpoint/barthez_mrpc/tokenizer_config.json
``` | 05-13-2021 02:17:42 | 05-13-2021 02:17:42 | No problem.It is just loading too long as 9 minute. |
transformers | 11,707 | closed | Loading Basic GPT-2 model gives warning that attention layers weren't loaded from pre-trained weights | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes. One GPU on Google Cloud Compute
- Using distributed or parallel set-up in script?: No
Tagging the following people for assistance
- gpt2: @patrickvonplaten, @LysandreJik
## Information
I'm using the GPT-2 vanilla model. Locally on my MacbookPro, my code runs as expected. When I try to run my Jupyter notebook on GCP, I however encounter an unseen warning. I provided the env details of said GCP instance above. I'm simply trying to load the vanilla GPT-2 model, but I keep getting the warning that the attention layers are not being initialized from the pre-trained weights as intended.
I get the following warning message:
`Some weights of GPT2Model were not initialized from the model checkpoint at gpt2 and are newly initialized: ['h.5.attn.masked_bias', 'h.6.attn.masked_bias', 'h.1.attn.masked_bias', 'h.8.attn.masked_bias', 'h.2.attn.masked_bias', 'h.10.attn.masked_bias', 'h.7.attn.masked_bias', 'h.4.attn.masked_bias', 'h.11.attn.masked_bias', 'h.9.attn.masked_bias', 'h.0.attn.masked_bias', 'h.3.attn.masked_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`
This happens when I attempt to execute the super simple model loading statement:
` pretrained_transformer = GPT2Model.from_pretrained('gpt2')`
I've seen similar issues people have posted, however, they tend to be loading a model and trying to load the weights to a different type of model. Here I am strictly trying to load a GPT2 model weight set and then configure the same GPT2 model with said weights.
I'm worried these warnings are real and it seems my experiments on GCP are not looking the same as the ones locally due to the weights not being loaded properly.
## To reproduce
Steps to reproduce the behavior:
1. Make GCP notebook
2. Try to load GPT-2 model
## Expected behavior
The behaviour I desire is no warning messages and being able to use all the fully trained pre-trained weights as I'm able to do locally.
| 05-12-2021 23:50:28 | 05-12-2021 23:50:28 | These warnings mention that buffers are not loaded, which is normal - they're created during the model initialization.
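A small check (an added sketch, not from the original reply) showing that `masked_bias` lives under buffers rather than parameters, so there is nothing to load for it from the checkpoint:
```python
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
print(any("masked_bias" in name for name, _ in model.named_parameters()))  # False
print(any("masked_bias" in name for name, _ in model.named_buffers()))     # True
```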
There was an attribute missing on the `GPT2Model` which led the warnings to still be raised, I'm fixing this in #11709!<|||||>Installing from source should resolve the warnings issue :) |
transformers | 11,706 | closed | Add Cloud details to README | # What does this PR do?
Clarifies the date and timezone for retrieving the prices to avoid future complaints about incorrectness.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-12-2021 15:07:29 | 05-12-2021 15:07:29 | |
transformers | 11,705 | closed | [Lazy init] Force fall back to slow init for composite models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Thanks to the great issue #11704 it was discovered that fast initialization currently breaks for all models whose `XXXPreTrainedModel` does not implement a `_init_weights` function and for which parts of the weights are missing when using `.from_pretrained(...)`. This includes essentially all composite models, being `Rag` and `EncoderDecoder`.
This PR does the vanilla fix of forcing those models to fall back on `_slow_init` since a better fix requires a careful re-design which is left for a future PR.
## Future PR
- [ ] Remove hacky `from_pretrained(...)` methods in RAG and EncoderDecoder
- [ ] Refactor the way "fast_init" calls `model._init_weights` for composite models. For Composite models, each part has to be called directly =>
```python
model.encoder._init_weights(all_missing_keys_of_encoder)
model.decoder._init_weights(all_missing_keys_of_decoder)
```
- [ ] Add more tests for RAG & EncoderDecoder
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 14:20:22 | 05-12-2021 14:20:22 | |
transformers | 11,704 | closed | [RAG] official facebook example code for RAG is not working anymore. | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-3.10.0-1127.10.1.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
rag: @patrickvonplaten, @lhoestq
Models:
RAG model
## Information
Model I am using RAG
The problem arises when using the official example scripts from https://huggingface.co/facebook/rag-sequence-nq, which I copied here:
```
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt")
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```
The tasks I am working on is:
I am trying to run the sample code above. Note that the same error also occurs when fine-tuning RAG with the official code in transformers/examples/research_projects/rag/finetune_rag.sh
## To reproduce
Steps to reproduce the behavior:
1. run the sample code above
The error is:
```
Traceback (most recent call last):
File "prova_rag.py", line 5, in <module>
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever, _fast_init=False)
File "/nlu/users/giovanni_bonetta/transformers/src/transformers/modeling_utils.py", line 1208, in from_pretrained
model, state_dict, pretrained_model_name_or_path
File "/nlu/users/giovanni_bonetta/transformers/src/transformers/modeling_utils.py", line 1278, in _load_state_dict_into_model
model._init_weights(module)
File "/nlu/users/giovanni_bonetta/miniconda2/envs/hf_venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 948, in __getattr__
type(self).__name__, name))
AttributeError: 'RagSequenceForGeneration' object has no attribute '_init_weights'
```
Looking at the last commits, I suppose the error was introduced in "Pytorch - Lazy initialization of models #11471" a couple of weeks ago, where the line `model._init_weights(module)` was introduced.
## Expected behavior
It should initialize the model without errors.
| 05-12-2021 13:14:35 | 05-12-2021 13:14:35 | |
transformers | 11,703 | closed | remove defaults to None if optional | PR to fix #11687
| 05-12-2021 12:30:51 | 05-12-2021 12:30:51 | |
transformers | 11,702 | closed | channel_len specified but not used | Here `channel_len` is specified but not used. Smells like a possible bug.
https://github.com/huggingface/transformers/blob/f063c56d942737d2c7aac93895cd8310afd9c7a4/src/transformers/models/ibert/quant_modules.py#L133 | 05-12-2021 12:15:19 | 05-12-2021 12:15:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik or @sgugger could you please check this before it gets closed by the "stale bot"?<|||||>cc @kssteven418 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,701 | closed | [Flax] Updates README and fixes bug | # What does this PR do?
Adds information about costs/pricing for Flax Bert Text Classification example.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 11:49:14 | 05-12-2021 11:49:14 | |
transformers | 11,700 | closed | Offline installation of the transformers repo (error message) | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: kaggle
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0 (yes)
- Tensorflow version (GPU?): NA
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Upload github repo as a kaggle dataset
2. Turn off internet
3. Run pip installation in notebook: !pip install /kaggle/input/transformersgithub/transformers
4. Error message with respect to setuptools. However, setuptools already installed: Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (49.6.0.post20210108)
```
Processing /kaggle/input/transformersgithub/transformers
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel
cwd: None
Complete output (7 lines):
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22390>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22790>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22ad0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba22e10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f3aaba16d10>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')': /simple/setuptools/
ERROR: Could not find a version that satisfies the requirement setuptools>=40.8.0
ERROR: No matching distribution found for setuptools>=40.8.0
----------------------------------------
WARNING: Discarding file:///kaggle/input/transformersgithub/transformers. Command errored out with exit status 1: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel Check the logs for full command output.
ERROR: Command errored out with exit status 1: /opt/conda/bin/python3.7 /opt/conda/lib/python3.7/site-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-b04ltufk/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel Check the logs for full command output.
```
## Expected behavior
| 05-12-2021 11:13:57 | 05-12-2021 11:13:57 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,699 | closed | Mixed precision training : link broken | A few weeks ago this link
https://github.com/huggingface/transformers/tree/master/examples/text-classification#mixed-precision-training
showed a comparison between models trained with and without mixed precision for a bunch of sequence/text classification tasks.
Now I think that link is broken (the examples seem to have been moved), but I cannot find the new location of this comparison.
Do you know where I can find that? | 05-12-2021 11:02:06 | 05-12-2021 11:02:06 | Yes, it's here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#mixed-precision-training
<|||||>Cool, thank you! |
transformers | 11,698 | closed | Support ViT model in EncoderDecoder | 05-12-2021 10:33:08 | 05-12-2021 10:33:08 | The idea is to use ViT as the encoder and then a LM as the decoder for Image captioning generation, *e.g.*? @patil-suraj and I were also thinking of using `EncoderDecoder` for Speech2Text.
At the moment, I see two solutions: either adapt `EncoderDecoder` to be usable for all modalities, not just text2text, **or** create new classes, *e.g.* a `SpeechEncoderDecoder` and a `VisionEncoderDecoder`, since I'm not sure `EncoderDecoder` will be able to handle all the new use-cases. *E.g.* we might end up adding way too many if-else statements that would make the code unreadable... @patil-suraj @abhi1thakur what do you think? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>Huhu,
any update on this? :)
A VisionEncoderDecoderModel would also be great for models which follow in the future,
for example this one: [TrOCR](https://arxiv.org/abs/2109.10282)<|||||>We'll add a new `VisionEncoderDecoder` class for this actually<|||||>@patrickvonplaten nice !
Is there any way or need to contribute ? :)<|||||>Closing this as [VisionEncoderDecoder](https://huggingface.co/docs/transformers/main/en/model_doc/vision-encoder-decoder#vision-encoder-decoder-models) now exists. |
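For reference, a minimal sketch of how the `VisionEncoderDecoderModel` mentioned above can pair a ViT encoder with a GPT-2 decoder for image captioning. The checkpoint names and image URL are only examples, and the cross-attention weights created this way are randomly initialized, so the model still needs fine-tuning on a captioning dataset before the generated text is meaningful.

```python
from PIL import Image
import requests
from transformers import AutoTokenizer, ViTFeatureExtractor, VisionEncoderDecoderModel

# pair a pretrained vision encoder with a pretrained language-model decoder
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# GPT-2 has no dedicated BOS/PAD handling for generation out of the box
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.eos_token_id

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values, max_length=16)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```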
|
transformers | 11,697 | closed | add the chinese ref in place to tackle the memory issue | # What does this PR do?
@wlhgtc
It eats 200+ GB of my memory when adding the Chinese refs and gets OOM-killed, so I can't go any further.
My train corpus has 17067704 lines (size: 1GB).
datasets 1.6.2 comes with a new in-place `add_column` function, which avoids the heavy copying (a short sketch of both approaches is included below). | 05-12-2021 10:25:46 | 05-12-2021 10:25:46 | Both ways work well in my own runs, but `add_column` is 4x faster than the original method.

LGTM!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
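A minimal sketch of the two approaches discussed in this PR thread, on toy data. The column name follows the `chinese_ref` field used by the Chinese whole-word-masking example; the actual inputs and ref values here are made up.

```python
from datasets import Dataset

# toy stand-ins for the tokenized corpus and its Chinese word-segmentation refs
ds = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})
chinese_ref = [[1], [2]]

# original approach: rebuild the whole dataset in memory (heavy copy for large corpora)
ds_copy = Dataset.from_dict({**ds[:], "chinese_ref": chinese_ref})

# datasets >= 1.6.2: attach the column without materialising a full copy
ds_inplace = ds.add_column("chinese_ref", chinese_ref)
```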
transformers | 11,696 | closed | [CLIP] fix example in config doc | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 10:24:22 | 05-12-2021 10:24:22 | |
transformers | 11,695 | closed | [Flax] Fix BERT initialization & token_type_ids default | # What does this PR do?
Fixes initialization of FLAX models by disabling `return_dict` since this can sometimes lead to problems in distributed settings. Also `token_type_ids` should be initialized to 0.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 09:41:05 | 05-12-2021 09:41:05 | |
transformers | 11,694 | closed | Fix clip docs | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 09:17:24 | 05-12-2021 09:17:24 | |
transformers | 11,693 | closed | Flag to disable shuffling for data loader | # 🚀 Feature request
Currently, Trainer is shuffling the train_dataset by default and there is no flag to enable/disable it.
@sgugger
## Motivation
Even if shuffling the dataset brings a lot of benefits, like preventing overfitting, one may at some point need to disable it for experimental reasons. It isn't possible to do this without overriding the `_get_train_sampler` method of Trainer. :(
## Your contribution
I can work on this issue (maybe next month) if this issue gets positive feedback. | 05-12-2021 09:06:54 | 05-12-2021 09:06:54 | I don't think this is a suitable feature, so I would recommend you override and subclass `get_train_dataloader` to simply return a training dataloader without using a sampler or `shuffle=True`. |
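A minimal sketch of that suggested workaround, assuming a standard `Trainer` setup with a map-style dataset (the class name is made up):

```python
from torch.utils.data import DataLoader, SequentialSampler
from transformers import Trainer

class NoShuffleTrainer(Trainer):
    def get_train_dataloader(self) -> DataLoader:
        # keep the dataset order fixed instead of using the default random sampler
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            sampler=SequentialSampler(self.train_dataset),
            collate_fn=self.data_collator,
            drop_last=self.args.dataloader_drop_last,
        )
```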
transformers | 11,692 | closed | fix url for CLIP doc | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 08:36:18 | 05-12-2021 08:36:18 | |
transformers | 11,691 | closed | BertForSemanticSimilarity | 05-12-2021 07:40:37 | 05-12-2021 07:40:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
|
transformers | 11,690 | closed | add --validation_split_percentage for custom dataset | # What does this PR do?
In the current version of the example, `--validation_split_percentage` only works for datasets loaded from the hub, not for custom datasets. If `--do_eval` is set for a custom dataset, it requires `--validation_file`.
This PR makes `--validation_split_percentage` work for a custom dataset when `--validation_file` is not set.
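A minimal sketch of the kind of split this enables via the `datasets` slice syntax. The file name and percentage are placeholders, not the exact code added in this PR.

```python
from datasets import load_dataset

data_files = {"train": "my_corpus.txt"}  # placeholder path
split_pct = 5  # value of --validation_split_percentage

raw_datasets = {
    "train": load_dataset("text", data_files=data_files, split=f"train[{split_pct}%:]"),
    "validation": load_dataset("text", data_files=data_files, split=f"train[:{split_pct}%]"),
}
```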
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-12-2021 03:04:08 | 05-12-2021 03:04:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,689 | closed | DeBERTa pretraining data preparation | ## Environment info
- `transformers` version:
- Platform: 4.6.0.dev0
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help
@LysandreJik @BigBird01
## Information
Model I am using (Bert, XLNet ...): DeBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): MLM + SQUAD 1
* [ ] my own task or dataset: (give details below)
## Expected behavior
I am pretraining DeBERTa Base from scratch on the Wikipedia + BookCorpus dataset. After pretraining for 500K steps I observe a SQuAD 1.1 score of 76, which is much lower than Figure 1(b) in the paper (although Figure 1(b) reports SQuAD 2.0 numbers, SQuAD 1.1 numbers should be noticeably better than those since it is an easier task). I am using the same hyperparameters as reported in the paper. I would like to confirm the preprocessing steps that the authors took to prepare the pretraining data.
1. In section 4.4.1, authors report that they used Megatron code base to deduplicate the data. The code provided performs deduplication based on urls. https://github.com/NVIDIA/Megatron-LM/tree/main/tools/openwebtext Was the deduplication performed on url -> document set or on shards of dataset?
2. [This](https://github.com/NVIDIA/Megatron-LM/blob/main/tools/openwebtext/cleanup_dataset.py) codebase also cleans up the dataset and removes non-English characters. Were these data cleanup steps performed on the pretraining data?
3. Is it possible to provide scripts used to generate pretraining data? | 05-12-2021 00:17:45 | 05-12-2021 00:17:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,688 | closed | Trainer skips training when continuing training with model.from_pretrained() | I'm fine-tuning the gpt2 model using TFTrainer.
As I have to save some computational power, I needed to split the dataset into parts and train on them one after another.
The first training (from TFGPT2LMHeadModel.from_pretrained('gpt2')) works well. After storing the model in a folder under a different name, and reloading it again to continue training, the trainer basically skips the training and stores the old model in the new model folder.
I don't understand the reason for this behavior.
transformers: 4.5.0
tensorflow:2.4.1
@Rocketknight1
Training code:
```
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, TFTrainer, TFTrainingArguments

def train(file, file_id, batch_size, num_epochs):
    # load_tokenizer() and prepare_pre_training_data_set() are my own helpers
    tokenizer, vocab_size = load_tokenizer()
    max_seq_length = 1000
    args = TFTrainingArguments(output_dir='out_gpt',
                               num_train_epochs=num_epochs,
                               do_train=True,
                               per_device_train_batch_size=batch_size,
                               gradient_accumulation_steps=2,
                               max_grad_norm=1.0)
    data = prepare_pre_training_data_set(file, tokenizer, max_seq_length)
    print("Datasets loaded...")
    with args.strategy.scope():
        model = TFGPT2LMHeadModel.from_pretrained('gpt2_trained_' + str(file_id))
        optimizer = tf.keras.optimizers.Adam(lr=0.0005)
        cat_loss = tf.losses.CategoricalCrossentropy()
        model.compile(optimizer=optimizer, loss=cat_loss)
        trainer = TFTrainer(model=model,
                            train_dataset=data,
                            args=args)
        trainer.train()
        trainer.save_model("gpt2_trained_" + str(file_id + 1))
``` | 05-11-2021 20:17:08 | 05-11-2021 20:17:08 | I'm really sorry if I have bothered you with that. Turns out, it works when using a different GPU :)
It struck me that when I tested the same code with less data on my local machine, the trainer logged epochs (which it did not before), so I tested it on several devices. I was just a bit confused to be honest, since there were no errors/warnings thrown by either tensorflow or the transformers library, even when using trainer (debug=True).<|||||>Hey! Don't worry about it - TFTrainer is currently not very maintained, and we're looking at switching away from it to a pure Keras framework, so some bits of it can be quite confusing right now. Don't be shy about letting us know if you run into other issues, especially if they look like they might be bugs at our end!
transformers | 11,687 | closed | Remove "`optional`, defaults to :obj:`None`" | There are some docstrings with "`optional`, defaults to :obj:`None`" arguments.
According to @sgugger this should be avoided: https://github.com/huggingface/transformers/pull/11417#discussion_r629320375
PS: I can provide a PR if wanted... | 05-11-2021 18:51:11 | 05-11-2021 18:51:11 | Please go ahead if you want to clean this! A quick search shows me 17 \`optional\`, defaults to :obj:\`None\` and two `optional`, defaults to None<|||||>ok - see #11703 @sgugger
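For illustration, the target docstring style looks roughly like this (made-up parameter; before the cleanup the first line of the argument entry would additionally end with "defaults to :obj:`None`"):

```python
def forward(self, attention_mask=None):
    r"""
    Args:
        attention_mask (:obj:`torch.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
            Mask to avoid performing attention on padding token indices.
    """
```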
transformers | 11,686 | open | Routing Transformers / Add Google PG-19 Models | # 🌟 New model addition - Google PG-19 Models
## Model description
Model checkpoints finally released as discussed in "Efficient Content-Based Sparse Attention with Routing Transformers"
Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier (https://arxiv.org/abs/2003.05997)
## Open source status
* [X ] the model implementation is available: (same link as below)
* [ X] the model weights are available: ( https://github.com/google-research/google-research/tree/master/routing_transformer)
* [X ] who are the authors: (see above)
Note: These tf2 models require proper conversion to pytorch versions and modifications to scripts to enable training and inference.
| 05-11-2021 16:51:34 | 05-11-2021 16:51:34 | There is an open-source pytorch implementation already - https://github.com/lucidrains/routing-transformer
Can't we adapt the RT implementation @lucidrains wrote to HF? <|||||>I've checked the repo before and was hoping that with the release of the models this would be possible.
The original models may be in TF1 rather than TF2 format. This requires a custom conversion script to PyTorch.
Perhaps coders with advanced Python skills will show interest in solving the above issues.
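As a possible first step for such a conversion, a minimal sketch of inspecting a TF checkpoint before mapping its variables to PyTorch. The checkpoint path and variable name are placeholders.

```python
import tensorflow as tf

ckpt_path = "/path/to/routing_transformer/model.ckpt"  # placeholder

# list variable names and shapes to plan the TF -> PyTorch name mapping
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)

# load individual arrays once the mapping is known
reader = tf.train.load_checkpoint(ckpt_path)
# array = reader.get_tensor("some/variable/name")  # hypothetical variable name
```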
|
transformers | 11,685 | closed | [WIP] Add flax generate | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-11-2021 16:32:31 | 05-11-2021 16:32:31 | |
transformers | 11,684 | closed | Add new model RoFormer (use rotary position embedding ) | # What does this PR do?
## Add new model RoFormer
[RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/pdf/2104.09864v1.pdf) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
The original code can be found [here](https://github.com/ZhuiyiTechnology/roformer).
## The abstract from the paper is the following:
*Position encoding in transformer architecture provides supervision for dependency modeling between elements at
different positions in the sequence. We investigate various methods to encode positional information in
transformer-based language models and propose a novel implementation named Rotary Position Embedding(RoPE). The
proposed RoPE encodes absolute positional information with rotation matrix and naturally incorporates explicit relative
position dependency in self-attention formulation. Notably, RoPE comes with valuable properties such as flexibility of
being expand to any sequence lengths, decaying inter-token dependency with increasing relative distances, and
capability of equipping the linear self-attention with relative position encoding. As a result, the enhanced
transformer with rotary position embedding, or RoFormer, achieves superior performance in tasks with long texts. We
release the theoretical analysis along with some preliminary experiment results on Chinese data. The undergoing
experiment for English benchmark will soon be updated.*
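To make the idea concrete, an illustrative sketch of applying a rotary position embedding to a query or key tensor. This is the simple half-split variant for illustration only, not the exact RoFormer implementation added in this PR.

```python
import torch

def rotary_embed(x, base=10000.0):
    # x: (batch, seq_len, num_heads, head_dim) with an even head_dim
    batch, seq_len, num_heads, head_dim = x.shape
    half = head_dim // 2
    # frequencies decay with the dimension index, as in the paper
    inv_freq = 1.0 / (base ** (torch.arange(0, half, dtype=torch.float32) / half))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]  # (seq_len, half)
    sin = angles.sin()[None, :, None, :]
    cos = angles.cos()[None, :, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # rotate each (x1, x2) pair by a position-dependent angle
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```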
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-11-2021 16:25:42 | 05-11-2021 16:25:42 | @patil-suraj I have updated some codes, please review again. Thanks~<|||||>@patil-suraj
- I fixed the docstrings format and the build_doc tests pass
- I have resolved the merge conflicts
- I have run make style and make quality
Thank you for reviewing this PR. ∩▂∩
<|||||>@patrickvonplaten I have done it, thanks ;)<|||||>Tests are fine I think (PyTorch times out :-/).
Good to merge for me<|||||>Thanks a lot @JunnYu, fantastic addition! |
transformers | 11,683 | closed | Issue getting prediction_scores from TransfoXLHeadLM model when labels are provided | ## Environment info
- `transformers` version: 4.5.1
- Platform: Colab
- Python version: 3.8.1
- PyTorch version (GPU?): 1.8.1+cu101 (Yes), same bug reproduced on CPU side (windows 10)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TransfoXL
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Slightly modified example from TransfoXL docs (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2lmheadmodel)
## To reproduce
Steps to reproduce the behavior:
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
with torch.no_grad():
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt",)
print(inputs)
outputs = model(inputs["input_ids"],return_dict=True,labels=inputs["input_ids"])
print(outputs['prediction_scores'])
```
## Expected behavior
```outputs['prediction_scores']``` should return a torch.FloatTensor, not ```()```. In this example
```
tensor([[[ -4.4980, -4.7363, -3.8697, ..., -18.4604, -20.6320, -15.2920],
[ -4.0868, -3.7895, -2.9193, ..., -19.4917, -20.0318, -15.7870],
[ -4.4769, -4.7728, -1.5619, ..., -21.3586, -22.2751, -18.7071],
[ -6.1670, -6.8841, -0.6857, ..., -21.4503, -22.3682, -19.5937],
[ -7.3567, -3.1381, -2.7641, ..., -18.3717, -20.6145, -17.4109],
[ -7.1151, -6.4929, -0.9753, ..., -21.8517, -21.9864, -20.3518]]])
```
is returned correctly when ```labels=None``` but not when ```labels=inputs["input_ids"]```. I've tested almost identical example in GPT2 and it did return (albeit unnormalized) logits regardless of whether labels are provided or not. | 05-11-2021 16:09:49 | 05-11-2021 16:09:49 | Hey @RedneckedCrake,
I think I agree with you here! It should be a very easy fix simply by change this line: https://github.com/huggingface/transformers/blob/6ee1a4fd3e80feef8fe7dc65aabb4c5270524f8a/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L1100. Would you like to give it a try to fix it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,682 | closed | Test checkpointing | # What does this PR do?
This fixes how weights are loaded when resuming training from a checkpoint, in the instances where some weights are tied to others (and thus not saved). It also adds a test in the common tests to make sure the mechanism used is not broken by mistake.
Fixes #11666 | 05-11-2021 15:48:05 | 05-11-2021 15:48:05 | |
transformers | 11,681 | closed | Cannot reproduce results from zeroshot demo app | Using the same text and the same labels I cannot "exactly" reproduce the result from the zeroshot app here https://huggingface.co/zero-shot/, using the "XLM Roberta XNLI" option. See below for my attempt to get the same result.
## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.6.0-1055-oem-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
``` python
from transformers import pipeline
model = "joeddav/xlm-roberta-large-xnli"
classifier = pipeline("zero-shot-classification", model=model, framework="pt")
txt = "There SHOULD be a connection of the GROUND wire to a ground in the breaker box. There also should be a connection of the NEUTRAL wire to a ground in the breaker box. There should be no other place in the building where such a connection occurs (i.e. not in any of the outlet boxes). The NEUTRAL (white) wire is a 'grounding conductor' for the plug, and is NOT safe to touch, while the GROUND (green) wire is a 'protective ground' and carries no current unless some kind of electrical fault has occurred. It's safe to touch the protective ground, but not to touch the grounding conductor (because there is current in the grounding conductor, its outlet-box end will not be at the same ground potential as its breaker-box end)."
template = "This text is about {}"
custom_labels = [
"politics",
"politics and guns",
"politics and middle east",
"religion or christianity or atheism",
"science and cryptography",
"science and electronics",
"science and medicine",
"science and space"
]
res = classifier(txt, candidate_labels=custom_labels, template=template, multi_label=False)
list(zip(res["labels"], res["scores"]))
```
```
[
('science and electronics', 0.17324578762054443),
('religion or christianity or atheism', 0.15423095226287842),
('politics and middle east', 0.12779277563095093),
('science and space', 0.1238853707909584),
('science and cryptography', 0.12293272465467453),
('science and medicine', 0.10926352441310883),
('politics', 0.09960934519767761),
('politics and guns', 0.08903954923152924)
]
```
## Expected behavior
I was hoping to get something similar to the result from pasting the same text and labels into the app, namely
```
[
('science and electronics', 14.4%),
('politics', 14%),
('science and space', 13.7%),
('politics and guns', 13.1%)
('politics and middle east', 12.7%),
('science and medicine', 11.8%),
('science and cryptography', 10.7%),
('religion or christianity or atheism', 9.6%)
]
```
Small differences would be expected, I guess, because of potential platform/framework differences etc., but the fact that the "religion or christianity or atheism" category leads to such different results makes me wonder if I'm not using the same model as the app, or a different prompt perhaps?
Neither of the two gives particularly great results in this case, but knowing the origin of this difference would be useful for better evaluating the pipeline.
EDIT: I've noticed I've passed the template with the wrong argument name (which didn't fail since `__call__` accepts arbitrary **kwargs), but using the correct one doesn't make the result any more similar to the one from the app. | 05-11-2021 15:28:03 | 05-11-2021 15:28:03 | It's subtle but frustratingly important: your hypothesis template needs to have a period at the end. `This text is about {}.` Try that and let me know.<|||||>With the period at the end (and the proper argument name `hypothesis_template`), I'm getting
```
[
('science and electronics', 0.1435241997241974),
('science and cryptography', 0.13319259881973267),
('science and space', 0.13153083622455597),
('religion or christianity or atheism', 0.12961038947105408),
('politics and middle east', 0.12887051701545715),
('science and medicine', 0.11486212909221649),
('politics', 0.11052283644676208),
('politics and guns', 0.10788644850254059)
]
```
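i.e. with a call along these lines (just a sketch; `classifier`, `txt` and `custom_labels` are the same as in the issue):
```python
res = classifier(
    txt,
    candidate_labels=custom_labels,
    hypothesis_template="This text is about {}.",  # note the trailing period
    multi_label=False,
)
```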
The code for the streamlit app is not public, is it? To see what other differences there may be...
(In general, but that's perhaps just the nature of this approach to zeroshot, the results seem quite sensitive to how the hypothesis is formulated. Even seemingly harmless changes may mean that the religious category is suddenly the most probable, which is kind of surprising for the sample text, but yeah that's another story...)<|||||>Btw, I'm assuming this message can be ignored
```
Some weights of the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing XLMRobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
because of how the original model is repurposed?<|||||>Yes you can disregard that message safely. And it's not 100% up to date but the code for the demo is public: https://github.com/joeddav/zero-shot-demo<|||||>Great, thanks! For some reason I didn't find that repo in your account when I looked. In any case, the only difference I can see is in the creation of the pipeline (the more manual creation using model and tokenizer instances instead of simply the model name). But that doesn't affect the result at all in my tests. So unless the app is somehow using a different version of the model I assume the difference is in how it is executed in different environments.<|||||>Hmm so the code in my repo is out of date because we now use the inference API as the backend and it looks like there's a discrepancy between inference API outputs and the pipeline outputs. I'll look into it.<|||||>Ok, let me know if I can test anything (small-scale) here to help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am dealing too with significant differences between streamlit example and my local testing, is there any update regarding this issue?<|||||>@shimonhaf How significant are the changes? It appears that this might be due to the quantization done by the inference API which the demo uses. cc @Narsil <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,680 | closed | [TokenClassification] Label realignment for subword aggregation | # What does this PR do?
Attempt to replace #11622
- Added `AggregationStrategy`
- `ignore_subwords` and `grouped_entities` arguments are now fused
into `aggregation_strategy`. It makes more sense because
`ignore_subwords=True` with `grouped_entities=False` did not have a
meaning anyway.
- Added 2 new ways to aggregate which are MAX, and AVERAGE
- AVERAGE requires a bit more information than the others, for now this
case is slightly specific, we should keep that in mind for future
changes.
- Testing has been modified to reflect new argument, and to check the
correct deprecation and the new aggregation_strategy.
- Put the testing argument and testing results for aggregation_strategy,
close together, so that readers can understand what is supposed to
happen.
- `aggregate` is now only tested on a small model as it does not mean
anything to test it globally for all models.
- Previous tests are unchanged in desired output.
- Added a new test case that showcases better the difference between the
FIRST, MAX and AVERAGE strategies.
Fixes #10263, #10763
See also #10568
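For illustration, the intended usage after this PR looks roughly like this (a sketch; the model name is just an example):
```python
from transformers import pipeline

# "simple", "first", "max" and "average" are the aggregation strategies;
# "none" returns the raw per-token entities.
ner = pipeline("ner", model="dbmdz/bert-large-cased-finetuned-conll03-english", aggregation_strategy="average")
print(ner("Hugging Face Inc. is based in New York City."))
```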
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| 05-11-2021 14:26:54 | 05-11-2021 14:26:54 | Thank you @Narsil, I'll take a look!
@francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape? <|||||>> @francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape?
Sure!
<|||||>> Thank you @Narsil, I'll take a look!
>
> @francescorubbo, @elk-cloner, this PR originated from yours and is based on the same approach. If you approve of this PR, can I add you as co-authors, as you've greatly contributed to its current shape?
Sure!<|||||>Great, I will!
Pinging also @joshdevins and @cceyda for feedback<|||||>Looking good, thank you for all the work! I'm wondering if you can include test cases for the original 3 examples provided in https://github.com/huggingface/transformers/issues/10263#issue-811193366 ? The new test examples here look correct but I'm not sure they cover the scope of the first examples. Maybe just stub out a real model and test with the labels for each sub-word token as provided in the example. This will exercise just the aggregation logic then as in theory a model could output any of these example labels.<|||||>@joshdevins
Do you mind giving an example displaying the issue?
I tried this, but I don't think it exhibits what you mention in the original issue: (https://github.com/huggingface/transformers/issues/10263#issue-811193366)
```python
NER_MODEL = "elastic/distilbert-base-cased-finetuned-conll03-english"
model = AutoModelForTokenClassification.from_pretrained(NER_MODEL)
tokenizer = AutoTokenizer.from_pretrained(NER_MODEL, use_fast=True)
sentence = """Accenture is a company. Max Mustermann is someone, Elasticsearch is something."""
nlp_ner = pipeline("ner", model=model, tokenizer=tokenizer)
output = nlp_ner(sentence)
print(output)
self.assertEqual(
nested_simplify(output),
[
{"entity": "B-PER", "score": 0.9953969, "index": 9, "word": "Max", "start": 24, "end": 27},
{"entity": "I-PER", "score": 0.9773876, "index": 10, "word": "Must", "start": 28, "end": 32},
{"entity": "I-PER", "score": 0.9924896, "index": 11, "word": "##erman", "start": 32, "end": 37},
{"entity": "I-PER", "score": 0.9860034, "index": 12, "word": "##n", "start": 37, "end": 38},
{"entity": "B-ORG", "score": 0.99201995, "index": 16, "word": "El", "start": 51, "end": 53},
{"entity": "B-ORG", "score": 0.99391395, "index": 17, "word": "##astic", "start": 53, "end": 58},
{"entity": "B-ORG", "score": 0.9962443, "index": 18, "word": "##sea", "start": 58, "end": 61},
{"entity": "B-ORG", "score": 0.9924281, "index": 19, "word": "##rch", "start": 61, "end": 64},
],
)
```<|||||>I think we should just wait for the test with `elastic` model and we' re good to go.<|||||>@Narsil I've since retrained that model (by labelling all sub-word tokens instead of padding) and it appears to work better with new domain data like "Elasticsearch".
The point was more to decouple the fix from a specific model and to be robust to possible outputs of a model particularly for words/sub-words that are out-of-domain for a model and relying on sub-word token classification which can be less predictable (in my experience). This was why I suggested stubbing out the model and just putting in the sub-word labels directly to the aggregator to see if the expected behaviour matches the actual new behaviour.<|||||>@joshdevins Yes, then I think I already added those tests here:
```python
def test_aggregation_strategy_example2(self):
model_name = self.small_models[0]
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
nlp = pipeline(task="ner", model=model_name, tokenizer=tokenizer, framework="pt")
# Just to understand scores indexes in this test
self.assertEqual(
nlp.model.config.id2label,
{0: "O", 1: "B-MISC", 2: "I-MISC", 3: "B-PER", 4: "I-PER", 5: "B-ORG", 6: "I-ORG", 7: "B-LOC", 8: "I-LOC"},
)
example = [
{
# Necessary for AVERAGE
"scores": np.array([0, 0.55, 0, 0.45, 0, 0, 0, 0, 0, 0]),
"is_subword": False,
"index": 1,
"word": "Ra",
"start": 0,
"end": 2,
},
{
"scores": np.array([0, 0, 0, 0.2, 0, 0, 0, 0.8, 0, 0]),
"is_subword": True,
"word": "##ma",
"start": 2,
"end": 4,
"index": 2,
},
{
# 4th score will have the higher average
# 4th score is B-PER for this model
# It does not correspond to any of the subtokens.
"scores": np.array([0, 0, 0, 0.4, 0, 0, 0.6, 0, 0, 0]),
"is_subword": True,
"word": "##zotti",
"start": 11,
"end": 13,
"index": 3,
},
]
self.assertEqual(
nlp.aggregate(example, AggregationStrategy.NONE),
[
{"end": 2, "entity": "B-MISC", "score": 0.55, "start": 0, "word": "Ra", "index": 1},
{"end": 4, "entity": "B-LOC", "score": 0.8, "start": 2, "word": "##ma", "index": 2},
{"end": 13, "entity": "I-ORG", "score": 0.6, "start": 11, "word": "##zotti", "index": 3},
],
)
self.assertEqual(
nlp.aggregate(example, AggregationStrategy.FIRST),
[{"entity_group": "MISC", "score": 0.55, "word": "Ramazotti", "start": 0, "end": 13}],
)
self.assertEqual(
nlp.aggregate(example, AggregationStrategy.MAX),
[{"entity_group": "LOC", "score": 0.8, "word": "Ramazotti", "start": 0, "end": 13}],
)
self.assertEqual(
nested_simplify(nlp.aggregate(example, AggregationStrategy.AVERAGE)),
[{"entity_group": "PER", "score": 0.35, "word": "Ramazotti", "start": 0, "end": 13}],
)
```
<|||||>@Narsil Ah cool, I missed those examples in my read-through. LGTM 🎉<|||||>@LysandreJik I changed the co-authors, I'll merge after you check I've done it correctly.<|||||>Looks good to me, feel free to merge! |
transformers | 11,679 | closed | Grammar and style edits for the frontpage README | I'm one of those people who always spots apostrophes out of place, I'm sorry! I went through the frontpage README and fixed things up. I also reran the code examples when I had to change the text inside them. | 05-11-2021 13:58:14 | 05-11-2021 13:58:14 | |
transformers | 11,678 | closed | Zeroshot pipeline performance worse on CPU when processing multiple texts as "batch" | Hi, I'm getting weird performance results using the zeroshot pipeline on a laptop with CPU. Essentially piping 5 texts through it at the same time is about 3x _slower_ than just iterating over the texts one by one:
``` python
texts = ...
classifier = pipeline("zero-shot-classification", model="joeddav/xlm-roberta-large-xnli")
t0 = time()
res1 = classifier(texts, labels, template, multi_label=False)
t1 = time()
res2 = [classifier(txt, labels, template, multi_label=False) for txt in texts]
t2 = time()
print(t1-t0, t2-t1)
```
```
>>> 85.13976335525513 27.092346906661987
```
The results are the same (other than some decimals in probabilities). In both cases 4 CPUs are utilized pretty much constantly at 100%. I don't know the code, but perhaps there is an attempt to parallelize at the level of texts, which is being blocked by the GIL or something? Perhaps it's just a documentation issue and batch processing is not supported on CPU?
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.6.0-1055-oem-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
## Information
Model I am using: joeddav/xlm-roberta-large-xnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
See example above. For reference, the 5 texts are random samples from the 20 newsgroups dataset:
```python
['I have to disagree with you on this one. It is anything BUT common. In the 4 or 5 years I have been watching hockey I have NEVER seen this happen EVER. I am not sure what league you have been watching. :-) Anyone else agree with this?',
'About a month ago there was a photo posted on alt.binaries.pictures.misc of a 17.5-inch Northern Pike which had been caught on a lure made of 256K SIMMs. --',
"You can't. But good luck trying.",
": The cops/feds do *not* need to be able to get hold of your private key to : listen in to cellular conversations. Encryption is not end-to-end, but : cellphone to base-station - it *has* to be this way so that cellular users : and fixed installations can talk to each other. For cellular to cellular : calls, the transmission is decrypted at the base-station, passed to another : base-station and re-encrypted. The cops/feds can listen to the unscrambled : call *provided* they get a warrant to tap into the cellular provider's : equipment. The only reason for wanting a crackable system is so they can : listen without having to obtain a warrant. : But, maybe the Clipper system is secure, and they really do need a warrant : to get the key out of escrow before they can listen in using a scanner (see : above - they don't *have* to go down this route anyway). I have my doubts, : but even if true once they have the key they will *never* again need a : warrant to tap into that particular phone whenever they want. `Well, Judge, : it appears he wasn't a drug-dealer after all, so naturally we'll stop : listening in'... That was true for the UK Paul, but I'm fairly sure they're talking about building end-to-end encryption phones out of this chip. It's *not* for cellular (though it certainly could be used there in the way you suggest)",
'I am trying to get a copy of the _official_ rules of baseball. Someone once sent me the ISBN number of it, but I have since lost it. Can anyone give me this information, or tell me where I can find the book? None of my local bookstores have it.']
```
## Expected behavior
Piping multiple texts through the pipeline should be at least as fast, and ideally faster, than iterating over individual texts.
| 05-11-2021 12:22:00 | 05-11-2021 12:22:00 | The reason is likely padding. When passed through as a batch, the shorter sequences have to be padded to the length of the longest sequence. On GPU, you'll still get a speedup because batching allows so much more parallel computing to happen that it makes up for the padding. But on CPU you just end up with more pad tokens to be processed without as much parallelization speedup. I bet if your 5 sequences were all approx the same length, the difference in compute times would be far smaller or the batched might even be faster.<|||||>That makes perfect sense. Thanks for the quick response! May be a good idea to try breaking up larger texts into similarly sized chunks then I guess. I'll try that if I get around to it.<|||||>I think on CPU, the simplest and best solution is probably going to be to just pass each sequence one at a time (at least if you have wildly variable-length sequences like in your example).<|||||>That's what I'm doing for now, thanks. I think you can close this issue then (unless you want to keep it around to add something to the docs at some point, but I guess low-resource inference using CPU only is not the typical use case anyway). |
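One way to see the padding effect described above is to compare the tokenized lengths of the five texts (a quick sketch; the printed values are only illustrative):
```python
lengths = [len(classifier.tokenizer(t)["input_ids"]) for t in texts]
print(lengths, max(lengths))
# In a batched call every sequence is padded up to max(lengths),
# so on CPU the short texts pay the cost of the longest one.
```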
transformers | 11,677 | closed | Identify issue in slow torch tests | Pull request to try and identify the source of the hangs in the torch slow CI. Torch slow CI was taking three hours per run until a few days ago, and has since jumped to 6+ hours, for an unknown reason. The job ends up being killed as it goes over the timeout, so the resulting time might end up being even larger than six hours.
Example of run that took 3 hours (April 20, 2021): https://github.com/huggingface/transformers/actions/runs/765376348
Example of run that took 6+ hours (April 21, 2021): https://github.com/huggingface/transformers/actions/runs/768949009
Here is an example of a run that took 6+ hours, while completing the full common tests: https://github.com/huggingface/transformers/runs/2443524960?check_suite_focus=true
The common tests took 5h56 minutes to complete, and the pipeline tests took more than 4 hours to complete before being apparently killed by CI, so there was clearly something going wrong here.
In order to investigate the root cause of the issue, opening a PR here. Tests will be conducted on a testing machine with the exact same configuration as the other CI machines. Investigating on a single run, on a single GPU machine.
The approach is discussed with @stas00, who is helping out and offered some of the steps below.
## Step 1
The first step is ensuring this is not an error linked to the machine itself, so we first start by running the job on the machine without changing anything to it. We only add a 240-minute timeout so that it can go on to step 2 if it goes over the 4 hour mark (as we know it should take less than 3 hours to complete)
See run for first step here: https://github.com/huggingface/transformers/runs/2554755801
Edit: First run errored out at 6 hours like on other machines. I do not think it is a setup issue.
## Step 2 (if step 1 doesn't resolve the issue)
The second step is twofold: removing `pytest-xdist` as we do not leverage it (we're using a single worker), and adding `pytest-timeout` with a timeout of 300 seconds.
See run for second step here: https://github.com/huggingface/transformers/runs/2554760360
## Step 3 (if step 1 & 2 don't resolve the issue)
Do a manual run - at the 3 hour mark, it should be hanging.
As it is hanging, try to retrieve information about what is hanging. For example, with the following:
```
pip install py-spy
# dumps traceback for each thread
sudo py-spy dump --pid PID
```
## Step 4 (if no step above resolves the issue)
The diff between the two jobs (3hr and 6hr) doesn't seem to have anything that would make the tests hang - but reverting to the previous repository state could help us identify the culprit. Diff: https://github.com/huggingface/transformers/compare/95037a1..95dab34
Additionally, Stas identified two differences in dependencies between the two runs:
```
-datasets-1.5.0
+datasets-1.6.0
-nltk-3.6.1
+nltk-3.6.2
```
Those should be investigated at the same time. | 05-11-2021 10:12:16 | 05-11-2021 10:12:16 | And one more possible venue of just tracing start/stop of each test, so perhaps this can help to identify which test doesn't complete.
Here is a poor man's start/stop tracer
```
# add to conftest.py
import pytest
import os
trace = os.environ.get('TRACE_START_STOP', "")
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
res = outcome.get_result()
file_name, _, test_name = res.location
test = f"{file_name} {test_name}"
if res.when == "setup" and res.passed:
if len(trace):
print(f"\nTRACE {test} start")
elif res.when == "call" and not res.passed:
pass
elif res.when == "teardown":
if len(trace):
print(f"\nTRACE {test} stop")
```
now run as:
```
TRACE_START_STOP=1 pytest tests/test_trainer.py
```
output:
```
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_can_resume_training start
.
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_can_resume_training stop
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_custom_optimizer start
.
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_custom_optimizer stop
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_data_is_not_parallelized_when_model_is_parallel start
.
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_data_is_not_parallelized_when_model_is_parallel stop
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_dynamic_shapes start
.
TRACE transformers-master/tests/test_trainer.py TrainerIntegrationTest.test_dynamic_shapes stop
```<|||||>Update: from further investigation, it was identified that the main culprit was the `test_trainer_seq2seq.py` file which is based on the `cnn_dailymail` dataset.
The issue is that this dataset contains a lot of examples and that it was cached on the shared disk, which is not necessarily in the same region as the machine. My intuition tells me reading large files such as model files is fine as the download/upload speed to the disk should be good - however, I doubt the latency holds up when looking for a lot of different small files. When processing the dataset, the machine did it at a rate of 10 examples per second - vs my laptop PC which handles them at a rate of 12,000 examples per second. Maybe @lhoestq has already encountered such an issue in the past.
Proposal to resolve the issue:
Right now I have patched this test by processing the dataset directly on the machine's disk, then moved it to the shared disk. When re-running the test, the machine picks the preprocessed dataset from the shared disk and passes the test in a total of 53 seconds, which is great.
What we learned with this endeavor is that:
- Having clean test outputs with :white_check_mark: everywhere is nice in theory, but when we have an issue we at least need the test names to be able to identify where it hangs
- Having the `pytest-timeout` dependency is a lifesaver as it can automatically kill the hanging test, like in this case.
I propose we keep the setup as it currently is, and find a way for the Slack CI feedback to say explicitly when there was a timeout. This would help to identify cases such as this one - and if such cases happen often, then we should re-think how the CI handles dataset storage, shared disk storage, or both.<|||||>Great work, @LysandreJik, at sorting it out!
> * Having clean test outputs with white_check_mark everywhere is nice in theory, but when we have an issue we at least need the test names to be able to identify where it hangs
We actually weren't getting this working fully, since pytest only prints the test name once at least one of the tests has completed. So for example if you get a pytest crash it will never print the test name if it was the first test in it.
So we should probably keep this one handy: https://github.com/huggingface/transformers/pull/11677#issuecomment-838846226
since it prints the test name as soon as the test starts (may even need to add a flush should it be buffered but usually the pytest print is unbuffered)
But also using `pytest -sv` will also start with a printout of each full test name, before the test is run, albeit it'd be very very noisy. But in a pinch that is a possible quick solution if you want to know which test started and hasn't finished.<|||||>I haven't experienced such speed differences (12 000 vs 10 samples per sec) on my side.
Note that the recent patch updates (1.6.1 and 1.6.2) fixed memory issues that could have led to slowdowns in some cases, have you tried updating `datasets` ?
Also let me know if I can help you on this<|||||>This should be closed as resolved due to revamped testing infrastructure (#15725, #15726, #15727, #15728, #15729). |
transformers | 11,676 | closed | Merge strings that are being concatenated in the same line | After fa84ae26d6, some strings are needlessly concatenated even though
they are not wrapped to multiple lines.
This change makes the code slightly less confusing and more grepable.
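As an illustration (not taken from the actual diff), the pattern being cleaned up is the implicit concatenation of adjacent string literals that ended up on a single line:
```python
# before: two literals on one line, implicitly concatenated
message = "this used to be wrapped " "over two lines"

# after: one literal, easier to read and to grep for
message = "this used to be wrapped over two lines"
```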
# What does this PR do?
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 05-11-2021 09:54:07 | 05-11-2021 09:54:07 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,675 | closed | Fix TF Roberta for mixed precision training | # What does this PR do?
This PR fixes the TF RoBERTa model for mixed precision training, so it is now aligned with the other models.
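For context, mixed precision in TF/Keras is typically enabled through the global policy, e.g. (a sketch):
```python
import tensorflow as tf
from transformers import TFRobertaModel

tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = TFRobertaModel.from_pretrained("roberta-base")
```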
# Fixes
#11282 | 05-11-2021 08:49:36 | 05-11-2021 08:49:36 | It looks good to me too. Thanks for the PR! |
transformers | 11,674 | closed | Add MacOS TF version | # What does this PR do?
This PR adds the macOS TensorFlow version, mostly for the Apple M1 laptops, for which it is the recommended version to use. | 05-11-2021 08:13:03 | 05-11-2021 08:13:03 | As far as I have tested until now, yes, it looks to be working quite well! I would even say that the work done by the Apple team on it is very impressive!!
I will push new PRs if I encounter new issues with it :)<|||||>Great, thanks a lot @jplu :) |
transformers | 11,673 | closed | Add --text_column to run_summarization_no_trainer | # What does this PR do?
Add the `--text_column` option to `run_summarization_no_trainer.py`
(mostly copy from `run_summarization.py`)
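A sketch of the kind of argument this adds (the exact help text may differ from the script):
```python
parser.add_argument(
    "--text_column",
    type=str,
    default=None,
    help="The name of the column in the datasets containing the full texts (for summarization).",
)
```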
Also removed a duplicated line:
`padding = "max_length" if args.pad_to_max_length else False`
@sgugger | 05-11-2021 06:50:00 | 05-11-2021 06:50:00 | Hi there and thanks for the PR! It doesn't really make sense to have `text_column` without `summary_column` to go with it. Could you add this one too?<|||||>Hi @sgugger ! `summary_column` is already there, it's only `text_column` missing. 😄 |
transformers | 11,672 | closed | Fix docstring of description about input_ids | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes the docstring describing `input_ids` in the `DistilBertForSequenceClassification` class.
Fixes #11659
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @sgugger
| 05-11-2021 06:36:43 | 05-11-2021 06:36:43 | |
transformers | 11,671 | closed | Why run_translation.py automatically runs on CPU? | I use examples/pytorch/translation/run_translation.py to fine-tune mbart-large-cc25 on my datasets, and it automatically runs on CPU. I have 2 GPUs, but only one is an Nvidia card (an RTX 2080 Super).
```bash
python main.py \
    --model_name_or_path facebook/mbart-large-cc25 \
    --do_train \
    --do_eval \
    --source_lang en_XX \
    --target_lang zh_CN \
    --train_file /data/2WangHongyu/bioNMT_WHY/train.json \
    --validation_file /data/2WangHongyu/bioNMT_WHY/dev.json \
    --output_dir /output \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate \
    --cache_dir /model/2WangHongyu/mbart-large
```
| 05-11-2021 05:44:02 | 05-11-2021 05:44:02 | Duplicate of https://github.com/huggingface/transformers/issues/11548#issuecomment-831159016
Could you please answer to
> Hi! Is your CUDA environment correctly set up? What is the output of the following in your environment?
>
> `python -c "import torch;print(torch.cuda.is_available())"`
and how do you identify that it's not running on GPU? Could you put the accompanying logs? At which point do you see it running on CPU when you think it should be running on GPU?<|||||>> Duplicate of [#11548 (comment)](https://github.com/huggingface/transformers/issues/11548#issuecomment-831159016)
>
> Could you please answer to
>
> > Hi! Is your CUDA environment correctly set up? What is the output of the following in your environment?
> > `python -c "import torch;print(torch.cuda.is_available())"`
>
> and how do you identify that it's not running on GPU? Could you put the accompanying logs? At which point do you see it running on CPU when you think it should be running on GPU?
True…
While the program is running, the GPU utilization is close to 0%, and the CPU utilization is close to 10%. After loading weights, the cmd window keeps showing ? it/s |
transformers | 11,670 | closed | license missing for xlm-roberta-large, and bert-base-spanish-wwm models | Hi HuggingFace team,
I am gratefully using many of the Hugging Face models, but I have found that the 'xlm-roberta-large' model is missing its license information.
also 'dccuchile/bert-base-spanish-wwm-uncased' and 'dccuchile/bert-base-spanish-wwm-cased'
[xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
[dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased)
[dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased)
Could you please add license information using readme or model card or this thread?
Thanks ! | 05-11-2021 04:16:59 | 05-11-2021 04:16:59 | xlm-roberta should be licensed under MIT like all models available through [fairseq](https://github.com/pytorch/fairseq#license). @aconneau could you please confirm?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,669 | closed | [Question] How to serialize and load a trained RoBERTa model? | First, I apologize for my English; I am still learning the language. I am also learning a little about transformer networks, TensorFlow, Keras, BERT and RoBERTa, so I am new to this.
For a Kaggle challenge I wrote code based on RoBERTa and the results were very good, but I could not replicate the same code in Colab Pro because of the TPU version available there, so I decided to save the weights from the Kaggle training run and load them into Colab. I haven't been able to do it; I'm doing something wrong and I don't understand what's going on.
At the end I will leave a link to the code on Kaggle; below is, in broad strokes, how it is written:
(`max_len`, `batch_size`, the TPU `strategy`, `x_train` and the train/validation datasets are created in other cells of the notebook.)
```python
import numpy as np
import pandas as pd
from transformers import AutoTokenizer, TFAutoModel
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow.keras.layers import GlobalAveragePooling1D, Dense
from tensorflow.keras import Model
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle

modelo = 'joeddav/xlm-roberta-large-xnli'
tokenizer = AutoTokenizer.from_pretrained(modelo)

def token(x):
    tokens = list(tokenizer.tokenize(x))
    tokens.append('</s>')
    t = tokenizer.convert_tokens_to_ids(tokens)
    return t

def roberta_encode(hypotheses, premises, tokenizer):
    Pad = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
    sentence1 = tf.ragged.constant([token(s) for s in np.array(hypotheses)], dtype=tf.int32)
    sentence2 = tf.ragged.constant([token(s) for s in np.array(premises)], dtype=tf.int32)
    cls = [tokenizer.convert_tokens_to_ids([tokenizer.cls_token])] * sentence1.shape[0]
    tokens = tf.concat([cls, sentence1, sentence2], axis=-1)
    tokens = tokens[:, :max_len]  # remove for the full version
    tokens = tokens.to_tensor(default_value=Pad)
    pad = max_len - tf.shape(tokens)[1]
    tokens = tf.pad(tokens, [[0, 0], [0, pad]], constant_values=Pad)
    input_word_ids = tf.reshape(tokens, [-1, max_len])
    input_mask = tf.cast(input_word_ids != Pad, tf.int32)
    input_mask = tf.reshape(input_mask, [-1, max_len])
    input_type_ids = tf.concat([tf.zeros_like(cls), tf.zeros_like(sentence1), tf.ones_like(sentence2)],
                               axis=-1).to_tensor()
    inputs = {
        'input_word_ids': input_word_ids,
        'input_mask': input_mask,
        'input_type_ids': input_type_ids}
    return inputs

def build_dataset(x, y, mode, batch_size):  # function seen in several notebooks
    if mode == "train":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices((x, y))
            .repeat()
            .shuffle(5678)
            .batch(batch_size)
            .prefetch(tf.data.experimental.AUTOTUNE)
        )
    elif mode == "valid":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices((x, y))
            .batch(batch_size)
            .cache()
            .prefetch(tf.data.experimental.AUTOTUNE)
        )
    elif mode == "test":
        dataset = (
            tf.data.Dataset
            .from_tensor_slices(x)
            .batch(batch_size)
        )
    else:
        raise NotImplementedError
    return dataset

def build_model(model, max_len):
    tf.keras.backend.clear_session()
    tf.random.set_seed(0)
    with strategy.scope():
        input_word_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
        model = TFAutoModel.from_pretrained(modelo)
        roberta = model([input_word_ids])[0]
        output = GlobalAveragePooling1D()(roberta)
        output = Dense(3, activation='softmax')(output)
        model = Model(inputs=[input_word_ids], outputs=output)
        model.compile(optimizer=Adam(lr=1e-5), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
        model.summary()
    return model

model = build_model(modelo, max_len)
```
Building the model prints:
```
Some layers from the model checkpoint at joeddav/xlm-roberta-large-xnli were not used when initializing TFXLMRobertaModel: ['classifier']
- This IS expected if you are initializing TFXLMRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFXLMRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFXLMRobertaModel were initialized from the model checkpoint at joeddav/xlm-roberta-large-xnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFXLMRobertaModel for predictions without further training.
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_word_ids (InputLayer)  [(None, 120)]             0
_________________________________________________________________
tfxlm_roberta_model (TFXLMRo TFBaseModelOutputWithPool 559890432
_________________________________________________________________
global_average_pooling1d (Gl (None, 1024)              0
_________________________________________________________________
dense (Dense)                (None, 3)                 3075
=================================================================
Total params: 559,893,507
Trainable params: 559,893,507
Non-trainable params: 0
_________________________________________________________________
```
Then training:
```python
steps_per_epoch = len(x_train) // batch_size
stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', verbose=1, patience=2, mode='min', restore_best_weights=True)
model.fit(train_dataset, validation_data=valid_dataset, steps_per_epoch=steps_per_epoch, epochs=4, callbacks=[stop])
```
```
Epoch 1/4
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py:595: UserWarning: Input dict contained keys ['input_mask', 'input_type_ids'] which did not match any model input. They will be ignored by the model.
  [n for n in tensors.keys() if n not in ref_input_names])
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/indexed_slices.py:430: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 256002048 elements. This may consume a large amount of memory.
  num_elements)
2487/2487 [==============================] - 852s 284ms/step - loss: 0.2515 - accuracy: 0.9074 - val_loss: 2.3169 - val_accuracy: 0.4474
Epoch 2/4
2487/2487 [==============================] - 683s 275ms/step - loss: 0.1742 - accuracy: 0.9391 - val_loss: 2.1128 - val_accuracy: 0.4446
Epoch 3/4
2487/2487 [==============================] - 685s 276ms/step - loss: 0.1359 - accuracy: 0.9527 - val_loss: 2.6941 - val_accuracy: 0.4377
Epoch 4/4
2487/2487 [==============================] - 685s 276ms/step - loss: 0.1070 - accuracy: 0.9631 - val_loss: 2.6835 - val_accuracy: 0.4423
Restoring model weights from the end of the best epoch.
Epoch 00004: early stopping
<tensorflow.python.keras.callbacks.History at 0x7f51f47b7150>
```
I tried saving the weights like this:
`model.save('RobertaClasi.hdf5')`
but one time it gave me an error message, and another time it was not possible to load the model back afterwards.
I really appreciate any guidance; here is the link to the code on Kaggle:
[https://www.kaggle.com/hugoarmandopazvivas/contradictory-my-dear-watson-hapv?scriptVersionId=62195702](url)
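One approach that should work for this setup (a sketch, assuming the same `build_model` and `max_len` are available on the Colab side) is to save only the weights and rebuild the architecture before loading them:
```python
# on Kaggle, after training
model.save_weights('roberta_clasi_weights.h5')

# on Colab, after copying the file over
model = build_model(modelo, max_len)
model.load_weights('roberta_clasi_weights.h5')
```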
@Rocketknight1 | 05-11-2021 00:12:20 | 05-11-2021 00:12:20 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,668 | closed | KeyError: 'bigbird_pegasus' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.10.25-linuxkit-x86_64-with-debian-10.1
- Python version: 3.7.4
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: (False)
- Using distributed or parallel set-up in script?: none
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): `google/bigbird-pegasus-large-arxiv`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
```python
import os
import torch
from datasets import load_dataset
from transformers import pipeline
from transformers import AutoTokenizer, AutoModel
dataset = load_dataset("patrickvonplaten/scientific_papers_dummy", "arxiv",
cache_dir=os.getenv("cache_dir", "../../models"))
paper = dataset["validation"]["article"][1]
tokenizer = AutoTokenizer.from_pretrained(
'google/bigbird-pegasus-large-arxiv',
cache_dir=os.getenv("cache_dir", "../../models"))
model = AutoModel.from_pretrained(
'google/bigbird-pegasus-large-arxiv',
cache_dir=os.getenv("cache_dir", "../../models"))
summarizer = pipeline(
'summarization',
model=model,
tokenizer=tokenizer)
```
Steps to reproduce the behavior:
1. Run the provided script
2. output:
```
2021-05-10 17:11:53.523744: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-05-10 17:11:53.523858: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Reusing dataset scientific_papers (models/scientific_papers/arxiv/1.1.1/051d70b9811c81480cbf2a238b499f7713ba4e19acdaeeb92320007d68b6d098)
Traceback (most recent call last):
File "src/bigbird/run.py", line 17, in <module>
cache_dir=os.getenv("cache_dir", "../../models"))
File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 398, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 421, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'bigbird_pegasus'
```
I have also tried this import
```python
from transformers import BigBirdPegasusForConditionalGeneration, BigBirdPegasusTokenizer
```
as described in the docs [here](https://huggingface.co/google/bigbird-pegasus-large-arxiv), but in this case I get another error:
```
from transformers import BigBirdPegasusForConditionalGeneration, BigBirdPegasusTokenizer
ImportError: cannot import name 'BigBirdPegasusForConditionalGeneration' from 'transformers' (unknown location)
```
## Expected behavior
no error | 05-10-2021 17:17:31 | 05-10-2021 17:17:31 | Hey @loretoparisi,
It's working perfectly for me when running this:
```shell
pip3 uninstall transformers
pip3 install git+https://github.com/huggingface/transformers@master
```
```python
from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("google/bigbird-pegasus-large-arxiv")
# or
model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
```
BigBird Pegasus does not have a `BigBirdPegasusTokenizer`, so use `AutoTokenizer` only.
<|||||>@vasudevgupta7 thank you, it worked; the key was `pip3 uninstall transformers`.<|||||>@vasudevgupta7 sorry, I'm a bit confused about the masked model, as in the case of BERT/RoBERTa:
```python
# by default its in `block_sparse` mode with num_random_blocks=3, block_size=64
model = BigBirdModel.from_pretrained("google/bigbird-roberta-large",
block_size=64,
num_random_blocks=3,
cache_dir=os.getenv("cache_dir", "../../models"))
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv",
cache_dir=os.getenv("cache_dir", "../../models"))
text = "Paris is the [MASK] of France."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
print(output)
decoded = tokenizer.decode(tokenizer.convert_tokens_to_ids(output))
print(decoded)
```
Is this the right way to decode the model's output?
Thank you!<|||||>@loretoparisi, you are using a `BigBird Roberta` model as the model and a `BigBird Pegasus` tokenizer -> those are two different checkpoints.
Also, it would be very nice if you could use the [forum](https://discuss.huggingface.co/) for "How to do ...." questions as we try to keep the github issues for actual issues with the models. Thank you :-)<|||||>@patrickvonplaten typo in the code, thanks. My two cents: model cards are missing the decoding part, even though it should be there because it is not trivial.<|||||>> Hey @loretoparisi,
>
> It's working perfectly for me when running this:
>
> ```shell
> pip3 uninstall transformers
> pip3 install git+https://github.com/huggingface/transformers@master
> ```
>
> ```python
> from transformers import BigBirdPegasusForConditionalGeneration, AutoTokenizer, AutoModelForSeq2SeqLM
> model = AutoModelForSeq2SeqLM.from_pretrained("google/bigbird-pegasus-large-arxiv")
> # or
> model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv")
>
> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
> ```
>
> BigBird pegasus is not having `BigBirdPegasusTokenizer` so use `AutoTokenizer` only.
I have the same problem
I tried the code
`pip3 uninstall transformers pip3 install git+https://github.com/huggingface/transformers@master`
and then get
```
WARNING: Did not find branch or tag 'master', assuming revision or ref.
Running command git checkout -q master
error: pathspec 'master' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q master did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q master did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
After running the code, the problem is still there<|||||>Relaunch my notebook, problem solved 😐 |
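For anyone hitting the `pathspec 'master' did not match` error above: the repository's default branch has since been renamed, so a source install now looks like this (or simply upgrade to a recent release):
```bash
pip install git+https://github.com/huggingface/transformers@main
# or
pip install -U transformers
```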
transformers | 11,667 | closed | [BigBird Pegasus] Add config to auto tokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds BigBirdPegasus to auto tokenizer
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-10-2021 16:22:10 | 05-10-2021 16:22:10 | |
transformers | 11,666 | closed | GPTNeoForCausalLM: resuming Trainer from checkpoint causes Missing key(s) in state_dict: "lm_head.weight" | ## Environment info
- `transformers` version: 4.6.0.dev0 (also happens with pip 4.5.1)
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic (Google Colab)
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): Not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- trainer: @sgugger
## Information
Resuming training from a `Trainer` checkpoint for `GPTNeoForCausalLM` causes the following runtime error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-3b03205cdcc2> in <module>()
2 ### %%%%%%%%%%%%%%%%%%%%%%%% TRAINING %%%%%%%%%%%%%%%%%%%%%%%%% ###
3 ### %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ###
----> 4 trainer.train(checkpoint)
1 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1222 if len(error_msgs) > 0:
1223 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1224 self.__class__.__name__, "\n\t".join(error_msgs)))
1225 return _IncompatibleKeys(missing_keys, unexpected_keys)
1226
RuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:
Missing key(s) in state_dict: "lm_head.weight".
```
This happens with the 125M model; I haven't tested with 1.3B and 2.7B. Loading the model manually using `.from_pretrained()` and commenting out the following lines in `/transformers/trainer.py`
```
else:
# We load the model state dict on the CPU to avoid an OOM error.
state_dict = torch.load(os.path.join(resume_from_checkpoint, WEIGHTS_NAME), map_location="cpu")
# If the model is on the GPU, it still works!
self.model.load_state_dict(state_dict)
```
Allows me to resume training.
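A less invasive workaround is possible because GPT-Neo ties `lm_head.weight` to the input embeddings (which is why the key is absent from the checkpoint). A sketch, assuming `checkpoint` is the checkpoint directory:
```python
import os
import torch

state_dict = torch.load(os.path.join(checkpoint, "pytorch_model.bin"), map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # lm_head.weight is the only expected missing key
model.tie_weights()  # re-ties lm_head to the input embeddings
```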
## To reproduce
Steps to reproduce the behavior:
1. Initialize training via `Trainer` for `GPTNeoForCausalLM` and save a checkpoint
2. Reset the environment and try to resume training from that checkpoint (a rough sketch of these two steps is given below)
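A rough, self-contained sketch of these two steps (the dataset here is a dummy and all hyperparameters are placeholders, chosen only so that a checkpoint gets written and can be resumed from):
```python
import torch
from transformers import GPTNeoForCausalLM, Trainer, TrainingArguments

# Tiny dummy dataset of token ids, only to exercise checkpoint saving and resuming.
class DummyLMDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        ids = torch.randint(0, 1000, (32,))
        return {"input_ids": ids, "labels": ids.clone()}

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
args = TrainingArguments(
    output_dir="gpt-neo-out", per_device_train_batch_size=2, num_train_epochs=1, save_steps=2
)
trainer = Trainer(model=model, args=args, train_dataset=DummyLMDataset())
trainer.train()  # writes gpt-neo-out/checkpoint-2, gpt-neo-out/checkpoint-4

# In a fresh environment, rebuild model/args/trainer the same way, then resume:
trainer.train(resume_from_checkpoint="gpt-neo-out/checkpoint-2")
```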
## Expected behavior
For the training to resume correctly | 05-10-2021 16:19:20 | 05-10-2021 16:19:20 | @xusky69, does the fix work for you? I'm still getting an error when training GPT-Neo:
```bash
...
File "huggingface/transformers_local/src/transformers/trainer.py", line 1366, in train
self.model.load_state_dict(state_dict)
File "huggingface-SJGCx2Wk/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:
Missing key(s) in state_dict: "lm_head.weight".
```
<|||||>The traceback shows you are not using a version of the library that has the fix (the line `self.model.load_state_dict(state_dict)` has been changed in the PR mentioned). Make sure to use a source install or upgrade to the latest release (4.6.0).<|||||>@sgugger , thanks for the response! I use the latest version, therefore the script fails in a different place (line 1365 – when the best model is loaded, but the PR fixes initial loading from the checkpoint). I've created a [new one](https://github.com/huggingface/transformers/pull/11718) – could you please take a look? |
transformers | 11,665 | closed | [Question] How to move and reuse preprocessed dataset? | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess).
I tried to:
1. copy `path_to_cache_dir/datasets` to `new_cache_dir/datasets`
2. set `export HF_DATASETS_CACHE="new_cache_dir/"`
but the program still re-preprocesses the whole dataset without loading the cache.
I also tried `torch.save(lm_datasets, fw)`, but the saved file is only 14M.
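For reference, a minimal sketch of explicitly serializing a processed dataset with the `datasets` disk API (the dataset content and paths below are placeholders):
```python
from datasets import Dataset, load_from_disk

# Stand-in for the dataset produced by run_clm.py's .map() preprocessing.
lm_dataset = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})

lm_dataset.save_to_disk("preprocessed_clm")    # copy this folder to the new machine
reloaded = load_from_disk("preprocessed_clm")  # loads without re-running .map()
print(reloaded)
```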
What is the proper way to do this? | 05-10-2021 15:53:18 | 05-10-2021 15:53:18 | cc @lhoestq the preprocessed dataset should be cached, right?<|||||>Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.
Could you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?<|||||>> Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.
>
> Could you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?
Sure, thanks. The new issue is here:
https://github.com/huggingface/datasets/issues/2345<|||||>> Hi ! Yes it should, as long as you didn't change any parameter passed to the `map` function. They must be exactly the same.
>
> Could you open an issue on the `datasets` repo at https://github.com/huggingface/datasets if you want to discuss this caching issue in more details ?
I tried to re-run the example [script](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py) after a successful run (preprocessing finished and training started), and it still re-preprocesses all the data.
**Details:**
(1) It re-preprocesses the data even after showing: `05/11/2021 11:47:01 - WARNING - datasets.builder - Reusing dataset text (/home/cache/text/default-7083a0557f2cff9e/0.0.0/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)`
(2) I didn't pass `--overwrite_cache`.
So, how can I reuse the preprocessed data? Is there any option I need to enable for the script? @lhoestq
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,664 | closed | RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC | ## Environment info
- `transformers` version: 4.5.0
- Platform: Ubuntu 18.04
- Python version: 3.7.4
### Who can help
- @patrickvonplaten
## Information
The model I'm using is Wav2Vec 2.0.
The problem arises when loading the pretrained/finetuned model:
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([123, 768]) from checkpoint, the shape in current model is torch.Size([132, 768]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([123]) from checkpoint, the shape in current model is torch.Size([132]).
```
The task I am working on is:
* Fine-tuning Wav2Vec 2.0 on my own data, starting from the fine-tuned French XLSR model.
## To reproduce
I followed the steps mentioned [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). I started by fine-tuning the base model on my own dataset. Then, I tried to fine-tune starting from the fine-tuned French XLSR model.
With fairseq, to fix this problem, we must add the `--restore` argument to say that the model is fine-tuned from an existing architecture.
Any idea how we can do this using Transformers?
Here is the whole error:
```
RuntimeError: Error(s) in loading state_dict for Wav2Vec2ForCTC:
size mismatch for lm_head.weight: copying a param with shape torch.Size([123, 768]) from checkpoint, the shape in current model is torch.Size([132, 768]).
size mismatch for lm_head.bias: copying a param with shape torch.Size([123]) from checkpoint, the shape in current model is torch.Size([132]).
``` | 05-10-2021 15:43:02 | 05-10-2021 15:43:02 | Hey @Kamilbentounes,
It looks like your `config.vocab_size` does not match the `config.vocab_size` of the fine-tuned French Wav2Vec2 model. It looks like you want to initialize the model with 132 characters, but the original vocab size is 123. Could you try to align the vocab size to the one of the fine-tuned model? :-)
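A minimal sketch of how to check the alignment (the checkpoint name is a placeholder for the fine-tuned French model you start from):
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder checkpoint: use the fine-tuned French XLSR model you are starting from.
checkpoint = "facebook/wav2vec2-large-xlsr-53-french"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# The CTC head has to match the tokenizer vocabulary (123 in the error above, not 132).
print(model.config.vocab_size, len(processor.tokenizer))
```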
If this doesn't fix the problem, please ping me here again<|||||>Hey @patrickvonplaten
Thanks for your quick reply. Yes, before writing this issue I tried it but I had an error during training phase :/ <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,663 | closed | Save scaler state dict when checkpointing | # What does this PR do?
One last thing was missing for resuming with checkpoints and having exactly the same results as a complete training: the gradient scaler state when using mixed precision with AMP in PyTorch. This PR addresses that.
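For illustration, a minimal sketch of what saving and restoring the scaler state involves (the path and the surrounding training loop are placeholders):
```python
import os
import torch

# The scaler used with native AMP; its state needs to travel with the checkpoint.
scaler = torch.cuda.amp.GradScaler()

# ... inside the training loop: scaler.scale(loss).backward(); scaler.step(optimizer); scaler.update() ...

os.makedirs("checkpoint", exist_ok=True)
torch.save(scaler.state_dict(), "checkpoint/scaler.pt")     # saved next to model/optimizer state
scaler.load_state_dict(torch.load("checkpoint/scaler.pt"))  # restored before resuming training
```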
Fixes #11323 | 05-10-2021 14:44:28 | 05-10-2021 14:44:28 | |
transformers | 11,662 | closed | IBERT: Testing the speedup | Hi,
I want to test IBERT's speedup, and I have done exactly what is said in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set quant_mode to true and run the evaluation again, I get a much slower model. What am I doing wrong?
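For reference, this is roughly how I am enabling quantization (a sketch; I am assuming this matches the model card's instructions):
```python
from transformers import AutoModel, IBertConfig

# Toggle quant_mode on the released checkpoint.
config = IBertConfig.from_pretrained("kssteven/ibert-roberta-base", quant_mode=True)
model = AutoModel.from_pretrained("kssteven/ibert-roberta-base", config=config)
print(model.config.quant_mode)  # True
```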
Thank you for your reply! | 05-10-2021 14:36:21 | 05-10-2021 14:36:21 | Hi! You may find the discussion in https://github.com/huggingface/transformers/issues/11312 by @kssteven418 interesting!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I want to test I-BERT as well, and I have done exactly what is said in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set quant_mode to true and run the evaluation again, I get a model with much lower accuracy. What am I doing wrong? |
transformers | 11,661 | closed | Update pretrained_models.rst | # What does this PR do?
Updates the description of facebook/bart-base and facebook/bart-large in Pretrained models to specify the number of encoder and decoder layers according to #11574 | 05-10-2021 13:03:16 | 05-10-2021 13:03:16 | Hi @patil-suraj ,
I added the (N encoder and decoder layers) to the existing descriptions for facebook/bart-base and facebook/bart-large. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,660 | closed | run_text_classification.py fix | This is the fix to TF run_text_classification.py suggested by @bhadreshpsavani in #10482 . | 05-10-2021 12:09:43 | 05-10-2021 12:09:43 | |
transformers | 11,659 | closed | [Doc] Something wrong in description of 'DistilBertForSequenceClassification' in doc | I think the description of input_ids (one of the parameters) of [DistilBertForSequenceClassification](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertforsequenceclassification) is not correct.
I think the input_ids should be `torch.LongTensor of shape (batch_size, sequence_length)`, rather than `torch.LongTensor of shape (batch_size, num_choices)`.
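A small sketch that illustrates the expected shape (the inputs are just for demonstration):
```python
from transformers import DistilBertForSequenceClassification, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")

enc = tokenizer(["first example", "second example"], padding=True, return_tensors="pt")
print(enc["input_ids"].shape)      # (batch_size, sequence_length)
print(model(**enc).logits.shape)   # (batch_size, num_labels)
```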
| 05-10-2021 12:02:31 | 05-10-2021 12:02:31 | That's correct, thanks for spotting! Could you open a PR to fix this?
Thanks! |
transformers | 11,658 | closed | NCCL "No space left on device" error while training with DeepSpeed | ## Environment info
- `transformers` version: 4.4.0
- Platform: docker
- Python version: 3.8
- PyTorch version (GPU?): pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 -f https://download.pytorch.org/whl/torch_stable.html
- Using distributed or parallel set-up in script?: DDP
Models:
- LayoutLMForTokenClassification
Library:
- deepspeed: @stas00
The problem arises when using:
I add my DeepSpeed config file to `TrainingArguments` when initialising the args object.
During training, I got a strange error from the NCCL backend. The error message is `No space left on device`, but that should not be possible.
Here is the complete traceback.
```
PyTorch version 1.7.1+cu101 available.
TensorFlow version 2.2.1 available.
Successfully imported onnx version 1.7.0
2021-05-10 10:56:19 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2021-05-10 10:56:24 INFO __main__ Training in distributed mode...
[2021-05-10 10:56:26,344] [WARNING] [runner.py:122:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-05-10 10:56:26,360] [INFO] [runner.py:360:main] cmd = /usr/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 nlp_ner_layoutlm/train_pipeline/training_step/training_script.py --local_example_folder /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data --model_dir /mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model --window_length 512 --batch_size 8 --weight_decay 0.0 --adam_epsilon 1e-08 --learning_rate 2e-05 --epochs 200 --seed 11046060 --bit_precision_fp16 1 --tagging_scheme BILOU --profile_logs /mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs --patience 50 --gradient_accumulation_steps 2 --warmup_steps 300 --composite 0 --n_transformer_layers 1 --composite_loss_weight 0.5 --self_training 0 --base_model /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
[2021-05-10 10:56:28,227] [INFO] [launch.py:73:main] 0 NCCL_DEBUG INFO
[2021-05-10 10:56:28,227] [INFO] [launch.py:73:main] 0 NCCL_VERSION 2.7.8
[2021-05-10 10:56:28,227] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2021-05-10 10:56:28,227] [INFO] [launch.py:86:main] nnodes=1, num_local_procs=2, node_rank=0
[2021-05-10 10:56:28,227] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2021-05-10 10:56:28,227] [INFO] [launch.py:102:main] dist_world_size=2
[2021-05-10 10:56:28,227] [INFO] [launch.py:104:main] Setting CUDA_VISIBLE_DEVICES=0,1
2021-05-10 10:56:30 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2021-05-10 10:56:30 DEBUG tensorflow Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
PyTorch version 1.7.1+cu101 available.
PyTorch version 1.7.1+cu101 available.
TensorFlow version 2.2.1 available.
TensorFlow version 2.2.1 available.
Successfully imported onnx version 1.7.0
Successfully imported onnx version 1.7.0
2021-05-10 10:56:32 INFO common_utils.utils Received the following cli arguments: ['nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=0', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']
2021-05-10 10:56:32 INFO common_utils.utils Received the following cli arguments: ['nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=1', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']
2021-05-10 10:56:32 INFO common_utils.utils Parsed the following parameters: {'local_example_folder': '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', 'model_dir': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', 'window_length': 512, 'batch_size': 8, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'learning_rate': 2e-05, 'epochs': 200, 'seed': 11046060, 'bit_precision_fp16': 1, 'tagging_scheme': 'BILOU', 'profile_logs': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', 'patience': 50, 'base_model': '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface', 'gradient_accumulation_steps': 2, 'warmup_steps': 300, 'old_model_dir': None, 'local_rank': 0, 'sampling_lambda': 0.0, 'self_training': 0, 'composite': 0, 'n_transformer_layers': 1, 'composite_loss_weight': 0.5}
2021-05-10 10:56:32 INFO common_utils.utils Parsed the following parameters: {'local_example_folder': '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', 'model_dir': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', 'window_length': 512, 'batch_size': 8, 'weight_decay': 0.0, 'adam_epsilon': 1e-08, 'learning_rate': 2e-05, 'epochs': 200, 'seed': 11046060, 'bit_precision_fp16': 1, 'tagging_scheme': 'BILOU', 'profile_logs': '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', 'patience': 50, 'base_model': '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface', 'gradient_accumulation_steps': 2, 'warmup_steps': 300, 'old_model_dir': None, 'local_rank': 1, 'sampling_lambda': 0.0, 'self_training': 0, 'composite': 0, 'n_transformer_layers': 1, 'composite_loss_weight': 0.5}
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/added_tokens.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/added_tokens.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer.json. We won't load it.
Didn't find file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer.json. We won't load it.
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/vocab.txt
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/vocab.txt
loading file None
loading file None
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/special_tokens_map.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/special_tokens_map.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer_config.json
loading file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/tokenizer_config.json
loading file None
loading file None
2021-05-10 10:56:32 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:32 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.data_io Creating features from dataset file at /f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.trainers Using base model from /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
2021-05-10 10:56:35 INFO transformers.configuration_utils loading configuration file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/config.json
2021-05-10 10:56:35 INFO transformers.configuration_utils Model config LayoutLMConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10",
"11": "LABEL_11",
"12": "LABEL_12",
"13": "LABEL_13",
"14": "LABEL_14",
"15": "LABEL_15",
"16": "LABEL_16",
"17": "LABEL_17",
"18": "LABEL_18",
"19": "LABEL_19",
"20": "LABEL_20",
"21": "LABEL_21",
"22": "LABEL_22",
"23": "LABEL_23",
"24": "LABEL_24"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_11": 11,
"LABEL_12": 12,
"LABEL_13": 13,
"LABEL_14": 14,
"LABEL_15": 15,
"LABEL_16": 16,
"LABEL_17": 17,
"LABEL_18": 18,
"LABEL_19": 19,
"LABEL_2": 2,
"LABEL_20": 20,
"LABEL_21": 21,
"LABEL_22": 22,
"LABEL_23": 23,
"LABEL_24": 24,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"layer_norm_eps": 1e-12,
"max_2d_position_embeddings": 1024,
"max_position_embeddings": 512,
"model_type": "layoutlm",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.4.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
2021-05-10 10:56:35 INFO nlp_ner_layoutlm.layoutlm.trainers Using base model from /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface
2021-05-10 10:56:35 INFO transformers.modeling_utils loading weights file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/pytorch_model.bin
2021-05-10 10:56:35 INFO transformers.configuration_utils loading configuration file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/config.json
2021-05-10 10:56:35 INFO transformers.configuration_utils Model config LayoutLMConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3",
"4": "LABEL_4",
"5": "LABEL_5",
"6": "LABEL_6",
"7": "LABEL_7",
"8": "LABEL_8",
"9": "LABEL_9",
"10": "LABEL_10",
"11": "LABEL_11",
"12": "LABEL_12",
"13": "LABEL_13",
"14": "LABEL_14",
"15": "LABEL_15",
"16": "LABEL_16",
"17": "LABEL_17",
"18": "LABEL_18",
"19": "LABEL_19",
"20": "LABEL_20",
"21": "LABEL_21",
"22": "LABEL_22",
"23": "LABEL_23",
"24": "LABEL_24"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_10": 10,
"LABEL_11": 11,
"LABEL_12": 12,
"LABEL_13": 13,
"LABEL_14": 14,
"LABEL_15": 15,
"LABEL_16": 16,
"LABEL_17": 17,
"LABEL_18": 18,
"LABEL_19": 19,
"LABEL_2": 2,
"LABEL_20": 20,
"LABEL_21": 21,
"LABEL_22": 22,
"LABEL_23": 23,
"LABEL_24": 24,
"LABEL_3": 3,
"LABEL_4": 4,
"LABEL_5": 5,
"LABEL_6": 6,
"LABEL_7": 7,
"LABEL_8": 8,
"LABEL_9": 9
},
"layer_norm_eps": 1e-12,
"max_2d_position_embeddings": 1024,
"max_position_embeddings": 512,
"model_type": "layoutlm",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.4.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
2021-05-10 10:56:35 INFO transformers.modeling_utils loading weights file /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface/pytorch_model.bin
2021-05-10 10:56:50 INFO transformers.modeling_utils All model checkpoint weights were used when initializing LayoutLMModel.
2021-05-10 10:56:50 INFO transformers.modeling_utils All the weights of LayoutLMModel were initialized from the model checkpoint at /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LayoutLMModel for predictions without further training.
2021-05-10 10:56:50 INFO transformers.modeling_utils All model checkpoint weights were used when initializing LayoutLMModel.
2021-05-10 10:56:50 INFO transformers.modeling_utils All the weights of LayoutLMModel were initialized from the model checkpoint at /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LayoutLMModel for predictions without further training.
2021-05-10 10:56:53 INFO nlp_ner_layoutlm.layoutlm.trainers training on cuda
2021-05-10 10:56:53 INFO nlp_ner_layoutlm.layoutlm.trainers training on cuda
2021-05-10 10:56:56 INFO transformers.training_args PyTorch: setting up devices
2021-05-10 10:56:56 INFO transformers.training_args PyTorch: setting up devices
[2021-05-10 10:56:56,398] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
[2021-05-10 10:56:56,429] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Bootstrap : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO NET/Socket : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Using network Socket
NCCL version 2.7.8+cuda10.1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO Bootstrap : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO NET/Socket : Using [0]eth0:10.1.0.194<0>
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:98 [1] NCCL INFO Using network Socket
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 00/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 01/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Setting affinity for GPU 1 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Setting affinity for GPU 0 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO Channel 01 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO Channel 01 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:176 [0] NCCL INFO comm 0x7ff1ec001060 rank 0 nranks 2 cudaDev 0 busId 100000 - Init COMPLETE
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:97 [0] NCCL INFO Launch mode Parallel
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:177 [1] NCCL INFO comm 0x7ff5d8001060 rank 1 nranks 2 cudaDev 1 busId 200000 - Init COMPLETE
2021-05-10 10:56:59 INFO transformers.trainer Using amp fp16 backend
2021-05-10 10:56:59 INFO transformers.trainer Using amp fp16 backend
2021-05-10 10:56:59 INFO nlp_ner_layoutlm.layoutlm.utils_train Starting to train...
2021-05-10 10:56:59 INFO nlp_ner_layoutlm.layoutlm.utils_train Starting to train...
2021-05-10 10:56:59 INFO transformers.integrations Keeping the `fp16` config from nlp_ner_layoutlm/toplevel_configs/ds_config.json intact, ignoring any fp16-specific cl args
[2021-05-10 10:56:59,844] [WARNING] [config.py:79:_sanity_check] DeepSpeedConfig: cpu_offload is deprecated. Please use offload_optimizer.
[2021-05-10 10:56:59,891] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
2021-05-10 10:56:59 INFO transformers.integrations Keeping the `fp16` config from nlp_ner_layoutlm/toplevel_configs/ds_config.json intact, ignoring any fp16-specific cl args
[2021-05-10 10:56:59,932] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.16, git-hash=unknown, git-branch=unknown
[2021-05-10 10:56:59,932] [WARNING] [config.py:79:_sanity_check] DeepSpeedConfig: cpu_offload is deprecated. Please use offload_optimizer.
[2021-05-10 10:56:59,954] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 00/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 01/02 : 0 1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Trees [0] -1/-1/-1->1->0|0->1->-1/-1/-1 [1] -1/-1/-1->1->0|0->1->-1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Setting affinity for GPU 1 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 8/8/64
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1|-1->0->1/-1/-1 [1] 1/-1/-1->0->-1|-1->0->1/-1/-1
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Setting affinity for GPU 0 to 0fff
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO Channel 00 : 0[100000] -> 1[200000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO Channel 00 : 1[200000] -> 0[100000] via direct shared memory
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO include/shm.h:41 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-8f2537dafaac0775-1-0-1 (size 9637888)
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport/shm.cc:101 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport.cc:30 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO transport.cc:49 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO init.cc:766 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO init.cc:840 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:98:184 [1] NCCL INFO group.cc:73 -> 2 [Async thread]
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO include/shm.h:41 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-c43b846667d22574-1-1-0 (size 9637888)
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport/shm.cc:101 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport.cc:30 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO transport.cc:49 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO init.cc:766 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO init.cc:840 -> 2
simple-layoutlm-mmml-1573-investigate-how-to-implement-d-lcbz2t:97:183 [0] NCCL INFO group.cc:73 -> 2 [Async thread]
2021-05-10 10:56:59 ERROR __main__ NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
2021-05-10 10:56:59 ERROR __main__ NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 64, in <module>
train_model(
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 147, in train_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
raise e
File "/app/nlp_ner_layoutlm/layoutlm/utils_train.py", line 145, in train_model
trainer.train()
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 903, in train
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps)
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414, in init_deepspeed
model, optimizer, _, lr_scheduler = deepspeed.initialize(
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120, in initialize
engine = DeepSpeedEngine(args=args,
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149, in __init__
self._configure_distributed_model(model)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 591, in _configure_distributed_model
self._broadcast_model()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 559, in _broadcast_model
dist.broadcast(p,
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 864, in broadcast
work = group.broadcast([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8
Killing subprocess 97
Killing subprocess 98
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 171, in <module>
main()
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 161, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/usr/local/lib/python3.8/dist-packages/deepspeed/launcher/launch.py", line 139, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'nlp_ner_layoutlm/train_pipeline/training_step/training_script.py', '--local_rank=1', '--local_example_folder', '/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/layoutlm_data', '--model_dir', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/pytorch_model', '--window_length', '512', '--batch_size', '8', '--weight_decay', '0.0', '--adam_epsilon', '1e-08', '--learning_rate', '2e-05', '--epochs', '200', '--seed', '11046060', '--bit_precision_fp16', '1', '--tagging_scheme', 'BILOU', '--profile_logs', '/mnt/pipeline/f8a83c0e-1438-4a6c-a2d1-f5ed8cf76b0a/tensorboard_logs', '--patience', '50', '--gradient_accumulation_steps', '2', '--warmup_steps', '300', '--composite', '0', '--n_transformer_layers', '1', '--composite_loss_weight', '0.5', '--self_training', '0', '--base_model', '/mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface']' returned non-zero exit status 1.
2021-05-10 10:57:04 INFO common_utils.message_utils Loading initial config...
2021-05-10 10:57:04 INFO common_utils.message_utils Injecting secrets...
2021-05-10 10:57:05 INFO common_utils.message_utils Done injecting keyvault into config...
2021-05-10 10:57:05 DEBUG common_utils.kafka Initializing kafka producer
2021-05-10 10:57:05 DEBUG common_utils.message_utils Sending exception to Kafka...
2021-05-10 10:57:05 DEBUG common_utils.kafka Message delivered to dev-train-result [1] @ 228
2021-05-10 10:57:05 DEBUG common_utils.message_utils Exception sent to Kafka.
Traceback (most recent call last):
File "nlp_ner_layoutlm/train_pipeline/training_step/layoutlm_train_model.py", line 251, in <module>
run_step(
File "/app/common_utils/kubeflow_utils.py", line 261, in run_step
step_callback(**args.__dict__)
File "nlp_ner_layoutlm/train_pipeline/training_step/layoutlm_train_model.py", line 130, in train_and_save_layoutLM_model
raise InternalError(
common_utils.errors.InternalError: Something went wrong while training in distributed mode. Process finished with Exit Code 1
```
When I set the environment variable `NCCL_SHM_DISABLE=1`, this error doesn't happen, but DeepSpeed doesn't produce better results and I can't train with bigger batches.
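For anyone hitting this, a quick way to check how much shared memory the container actually has (NCCL's shared-memory transport allocates its segments under `/dev/shm`, which defaults to only 64MB in Docker):
```python
import shutil

total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: {free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB")
```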
| 05-10-2021 11:15:39 | 05-10-2021 11:15:39 | I am closing the issue, because I see that it is a docker shm issue that is not related to HF. I have solved the issue thanks to [this stackoverflow issue](https://stackoverflow.com/a/46434614/11758585) |
transformers | 11,657 | closed | Memory Leak in Deberta (v1) Base | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.4.0-1047-aws-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@patrickvonplaten @LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...):
I am using a Deberta-base. First I've pre-trained it on >630M texts in spanish, with a BPE tokenizer trained on the same corpus, which in total is 590M (I've performed more than one epoch), using MLM-WWM. Then, I'm trying to use this model on fine-tuning, but I'm facing some issues.
First of all, Deberta is supposed to be much better than Bert and Roberta, however I'm experiencing a very bad performance, when compared to the other spanish model: dccuchile/bert-base-spanish-cased (BETO from now on), which supposedly has a worse architecture and which is trained only slightly more than my model. I've tried with many different hyperparameters, following the recommendations in Deberta paper, without improving results. For reference, in a problem where BETO (the model above) achieves 0.97, I'm achieving 0.91 at best. Moreover, as I'm training many models for hyperparameter search (without using your hyperparameter search api), I see that with each new Deberta model the GPU memory usage increases, which doesn't happen with BETO. I think this is a memory leak in the implementation of Deberta, or at least in the token classification and sequence classification layers of deberta. I don't know if this inefficient implementation leading to this memory leak can have any relationship with the poor performance of the model. Could you please take a look at it?
I hope the architecture itself is not wrongly coded, because otherwise we've spent thousands of dollars training a Spanish model from scratch for nothing. Please let me know if there is any further information I can provide to help clear this up. I'm a little anxious over here because the results aren't as expected and because there are clear signs that the DeBERTa implementation has, at least, a memory management problem.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below): My script consists of a loop for training different versions of my Spanish DeBERTa model on a dataset (each version is the same model with different hyperparameters).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below): I've tried with PAWS-X, ConLL2002, Ehealth_kd, Muchocine. All these datasets were downloaded from the datasets library.
## To reproduce
Steps to reproduce the behavior:
1. Use the deberta-base model and fine-tune it on a given dataset (it doesn't matter which one).
2. Create a hyperparameter dictionary and get the list of hyperparameters for each run with `list(sklearn.model_selection.ParameterGrid(search_dic))`.
3. Train the model with the `Trainer`, using in each run the hyperparameters from the above list. As each model is trained, you will see an increase in GPU memory usage even after calling `torch.cuda.empty_cache()` (a rough sketch of such a loop is shown below).
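A rough sketch of the kind of loop that reproduces this (the dataset is a tiny dummy and the hyperparameter grid is a placeholder; the explicit cleanup at the end of each iteration is what should, in principle, return GPU memory to the same level every run):
```python
import gc

import torch
from sklearn.model_selection import ParameterGrid
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tok = AutoTokenizer.from_pretrained("microsoft/deberta-base")
enc = tok(["a positive example", "a negative example"] * 4, padding=True, return_tensors="pt")

# Tiny dummy dataset just to exercise repeated Trainer runs.
class TinyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return enc["input_ids"].shape[0]

    def __getitem__(self, i):
        return {
            "input_ids": enc["input_ids"][i],
            "attention_mask": enc["attention_mask"][i],
            "labels": torch.tensor(i % 2),
        }

for params in ParameterGrid({"learning_rate": [1e-5, 3e-5]}):
    model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=2)
    args = TrainingArguments(output_dir="deberta-out", num_train_epochs=1, **params)
    trainer = Trainer(model=model, args=args, train_dataset=TinyDataset())
    trainer.train()
    # Explicit cleanup between runs; GPU memory should come back to roughly the same level each time.
    del trainer, model
    gc.collect()
    torch.cuda.empty_cache()
```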
## Expected behavior
it is expected, given the results reported on Deberta paper, that Deberta-base works better than Bert-base with less training (the architecture of BETO), therefore I wouldn't expect that after training for almost as long as BETO we have much worse results than it. Also, it is expected that after each run with trainer, and after deleting the trainer from memory with del Trainer, and releasing gpu memory with torch.cuda.empty_cache(), the gpu memory usage is not increased from run to run, as with other model architectures this doesn't happen, and with Deberta it does. | 05-10-2021 10:27:58 | 05-10-2021 10:27:58 | Hello @alexvaca0, thank you for opening an issue! Is there a way for you to provide a collab with a reproducer so that we may take a look at the memory issue?
Regarding the very bad performance, and your query that you hope the "architecture itself is not wrongly coded" - rest assured, the architecture was contributed by the author of the model. I believe DeBERTa has historically been hard to pretrain, as I've heard similar reports in the past. Pinging @BigBird01, the author of the model.
Pengcheng, do you have some tips regarding pretraining the DeBERTa model?
I believe the original repository also contains code for model pretraining: https://github.com/microsoft/DeBERTa
Have you taken a look at the pretraining script in that repository?<|||||>Yes. We already released our code for pre-training and fine-tuning(SiFT) in our public repo. Please take a look at it. By saying it's hard to pre-train, what do you refer to? Do you mean instability or accuracy of the model?
Thanks!
Pengcheng
<|||||>@BigBird01 @LysandreJik Hi, thanks for the quick response to both of you, I really appreciate your help :)
Currently I don't think I can find the time to prepare a reproducer. Maybe if you have a script for training a model with several configurations in a loop, or one using Optuna with the hyperparameter search API from `Trainer` (it also happens there), you can just replace the model string you were using with microsoft/deberta-base. Using one of the example Colabs from Transformers would also be useful, as you'd only have to replace the model name: https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb
I'm glad to know that there are no mistakes in the implementation itself, and therefore the only issue to solve is this memory leak.
I've taken a look at the DeBERTa repository, but I can't find a pre-training script; where exactly can I find it? However, in order not to waste all the money already spent in training the model, I think it'd be more appropriate to continue using the Transformers code. I've followed all the hyperparameters stated in the paper for DeBERTa-base for pre-training; these don't change in your pre-training script, do they? @BigBird01
Another issue is that there is no SpanWholeWordMaskCollator in Transformers, therefore we are training with Whole Word Masking... do you think this will severely affect the model performance? On the other hand, if you have code for collating batches with Span Whole Word Masking, do you think it would be possible to put that in transformers data_collator.py code and continue training using that new collator? Or this may lead to divergence of the model?
Thank you again, and sorry about all the questions, I've many doubts regarding this subject.
Regards,
Alejandro<|||||>@BigBird01 other people have had issues with pretraining, this issue comes to mind: https://github.com/huggingface/transformers/issues/11689<|||||>@alexvaca0 The Transformers library is not intended to become a host for data collators specific to all possible tasks, so we probably won't add this `SpanWholeWordMaskCollator`. You can however copy it into any of your scripts and use it.<|||||>@sgugger I don't think that collator is so rare, in fact many models such as SpanBERT, ALBERT and DEBERTA use this pre-training setup... <|||||>Any updates regarding the memory leak? I'm still experiencing it...<|||||>Hi @alexvaca0, I am trying to reproduce the memory leak you mention but I do not manage to obtain it. Within a loop I create a new model, `TrainingArgument` and `Trainer`, start the training and look at the metrics.
I also tried only running `trainer.train()` within the loop, and the second iteration gets a slight increase in GPU memory usage but it stabilizes right after.
I've tried with the hyper-parameter search as well (using `optuna`) but have not managed to replicate the memory increase you mention.
If you have a script (even a large one as long as I can run it locally) or a notebook that reproduces the leak, I would happily take a look at it.<|||||>I could prepare one, as I cannot share my pre-trained deberta model with you... But I think we could replicate it the following way: retraining the deberta-base english model for some steps more, saving that checkpoint, and then using a hyperparameter search with optuna from that checkpoint, not the official deberta base checkpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm still experiencing this issue. For example, if you initialize a Trainer in Google Colab with deberta-base and then try to change the trainer to use another model, the GPU memory used by DeBERTa is not released. I mean, if the trainer with DeBERTa used 16GB, when I try to change the trainer and set bert-base, for example, the object is not replaced. This brings me to the same conclusion I claimed above: there must be a memory leak in the DeBERTa code; it leaves objects on the GPU that cannot be released. @patrickvonplaten @LysandreJik @sgugger <|||||>Hello @alexvaca0! As mentioned above, I have tried to reproduce but I have failed to do so. Would you happen to have a script handy so that we may take a look? You do not need to share your model if you do not wish to; any DeBERTa model on the hub should do.
Thank you.<|||||>Could this be a fix to your issue? https://github.com/huggingface/transformers/pull/12718<|||||>Hi @LysandreJik , as soon as I can I'll try to re-install transformers from source and see if #12718 fixes my issue, although it seems to be related to cpu memory, not gpu memory; moreover, I didn't experience this with any other model but deberta-base, with BERT for example it worked smoothly. I'll also prepare a notebook for you to reproduce, as soon as the workload I have enables me to do so. Thanks! :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,656 | closed | DISTILBERT: run_squad.py not working | Hi,
I am trying to use the transformers/examples/legacy/question-answering/run_squad.py script to train and evaluate DistilBERT on squad2.0. Unfortunately it throws the following error:
**TypeError: forward() got an unexpected keyword argument 'token_type_ids'**
I used the following tokenizer: distilbert-base-uncased
Is there a way for me to fix this issue? Thank you for your reply! | 05-10-2021 10:20:19 | 05-10-2021 10:20:19 | Hi! This script is a legacy script that we will not be maintaining anymore. Have you tried with the `run_qa.py` script available [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering)? This script should be simpler to understand and more complete.
Let us know if you run into any issues, thanks.<|||||>Thank you! I will try this script then!
<|||||>Hi!
I followed your advice and ran DistilBERT with run_qa.py on squad_v2 with the following arguments:
python transformers/examples/pytorch/question-answering/run_qa.py --model_name_or_path distilbert-base-uncased --dataset_name squad_v2 --do_train --do_eval --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --max_seq_length 384 --doc_stride 128 --output_dir output_distilbert
Unfortunately, for the evaluation I get the following error:
ValueError: max() arg is an empty sequence
Did I forget to add one parameter? Thank you for your answer
<|||||>Could you provide the full stack trace? Thank you!
@sgugger <|||||>Yes of course! Thank you
Traceback (most recent call last):
File "transformers/examples/pytorch/question-answering/run_qa.py", line 613, in <module>
main()
File "transformers/examples/pytorch/question-answering/run_qa.py", line 581, in main
metrics = trainer.evaluate()
File "/home/ines/Ibert/transformers/examples/pytorch/question-answering/trainer_qa.py", line 56, in evaluate
metrics = self.compute_metrics(eval_preds)
File "transformers/examples/pytorch/question-answering/run_qa.py", line 543, in compute_metrics
return metric.compute(predictions=p.predictions, references=p.label_ids)
File "/home/ines/Ibert/venv_ibert/lib/python3.7/site-packages/datasets/metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/squad.py", line 109, in _compute
score = evaluate(dataset=dataset, predictions=pred_dict)
File "/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/evaluate.py", line 67, in evaluate
exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths)
File "/home/ines/.cache/huggingface/modules/datasets_modules/metrics/squad/513bf9facd7f12b0871a3d74c6999c866ce28196c9cdb151dcf934848655d77e/evaluate.py", line 52, in metric_max_over_ground_truths
return max(scores_for_ground_truths)
ValueError: max() arg is an empty sequence<|||||>Ah, since you're using the squad V2 dataset I believe you must also tell the script that it should understand examples that don't have an answer. For this, you can add the `--version_2_with_negative` argument when running your script.
Does that help?<|||||>Yes it does, I thought it was taken into account with the name of the dataset, but I was wrong. Thank you!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,655 | closed | Fine-tuning the Entire RAG Architecture (including DPR retriever) | # What does this PR do?
The original RAG implementation is able to end-to-end train the question encoder and the generator.
This extension enables the end-to-end training of RAG including the context encoder in the retriever component.
Please read the [accompanying blog post](https://shamanesiri.medium.com/how-to-finetune-the-entire-rag-architecture-including-dpr-retriever-4b4385322552) for details on this implementation.
The original RAG code has also been modified to work with the latest versions of PyTorch lightning (version 1.2.10) and RAY (version ). All other implementation details remain the same as the [original RAG code](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag).
Read more about RAG at https://arxiv.org/abs/2005.11401.
This code can be modified to experiment with other research on retrieval augmented models that include training of the retriever such as [REALM](https://arxiv.org/abs/2002.08909) and [MARGE](https://arxiv.org/abs/2006.15020).
Reviewers @patrickvonplaten @lhoestq | 05-10-2021 09:28:10 | 05-10-2021 09:28:10 | > Great, I think we are very close to merging this PR :-)
>
> Could we add a test in both `test_modeling_rag` and `test_retrieval_rag` ? We should test there that the model behaves correctly when `set_context_encoder_for_training` is set
on it!<|||||>Hello, I tried running the code in this pull request because the methodology is something I'm very interested in and I ran into a few issues. These may be things you are aware of but I just wanted to mention them in case you hadn't run into them.
1. When launching the code in distributed mode (CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8) I received the error:
handle = worker.core_worker.get_named_actor_handle(name)
File "python/ray/_raylet.pyx", line 1496, in ray._raylet.CoreWorker.get_named_actor_handle
File "python/ray/_raylet.pyx", line 157, in ray._raylet.check_status
ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.
Based on the traceback it seems to be related to line 733 in finetune_rag.py
named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
If I run it with the normal python launch it goes fine however. This is with ray version 1.3.0, pytorch-lightning 1.2.10 and transformers version 4.7.0.dev0 which matches the requirements.txt file. This is an issue because I don't believe the sharded_ddp plugin works without the distributed launch.
2. pytorch_lightning.utilities.exceptions.MisconfigurationException: ModelCheckpoint(monitor='val_em') not found in the returned metrics: ['val_loss', 'val_avg_loss', 'val_avg_em', 'val_avg_gen_time', 'val_avg_gen_len', 'step_count', 'loss']. HINT: Did you call self.log('val_em', value) in the LightningModule?
This happens at the first validation step after training epoch 0. I believe that the values being passed on line 421 in the finetune_rag.py script are not named correctly? The metrics all have "_avg_" in the name however the monitored metric doesn't seem to have that and is just "val_em".
3. It seems that in some locations the ctx_encoder_tokenizer is a required keyword argument in some locations and not in others. I had change line 367 in retrieval_rag.py to: def __init__(self, config, question_encoder_tokenizer, ctx_encoder_tokenizer, generator_tokenizer, index=None, init_retrieval=True): adding the ctx_encoder_tokenizer otherwise it said it was missing the keyword argument.
4. I had to change the line 528 in modeling_rag.py from "self.context_encoder_training=False" to "self.context_encoder_training=True" in order to get it to properly do the context_encoding. I could have messed something else up in the training to cause it to not properly set this to True when setting the context encoder but I couldn't get it to work without doing this (threw the error KeyError: 'tokenized_doc_ids')
5. I had to add "from transformers import PreTrainedTokenizer" to retrieval_rag.py also because that doesn't seem to be imported anywhere in the file but is used in the file on line 552. This could be an issue with my transformer version but I still believe it would have to be in the import statements anyways no?
Any or all of these could be issues with how I'm running it but I figured I'd bring them to your attention because these were all the things I had to change in order to get it to run. I can provide more info on any/all of these if you would like but I figured I would give you a list of things I ran into since it hasn't been merged yet so i'd imagine not many people have tried to run it end to end. Thanks for adding this feature though; it's definitely going to be a big upgrade for those of us who are using the model on different datasets and use cases.
<|||||>@calderma
You have done an amazing exploration. I am really sorry :( Apart from the first issue, all the other things are already fixed, but I did not push the latest version since I am conducting some experiments to update the README (original RAG vs. updated RAG).
In the latest version, I added a dummy training example which makes things a lot more clear.
For the first issue, I actually do not have a distributed system to test. I think it is something to do with PyTorch-lightning initialization. Try to add distributed parameters to the lightning trainer. **(Trainer(gpus=8, accelerator='ddp', num_nodes=4))**
**named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]**
During the initialization of RAY workers, we create them only on the master DDP process. So the above line is there to get the retrieval workers for other ddp processes (let's say cuda 1, 2, ..) that have already created during the master process [(check this line)](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L712).
As the error shows in your issue, what has happened is that the initialization across the nodes somehow hasn't been [shared](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L712). So Ray gives an error saying it can't find an already initialized worker by its name. I think this has something to do with how PyTorch Lightning executes multi-node training, so I would suggest you follow their guidelines, something like the following.
Please let me know if this works, so I can include it in the latest version.
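For readers hitting the same error, a minimal sketch of the multi-node Lightning setup being suggested here; the `gpus`/`num_nodes`/`accelerator` values are the ones quoted in this thread, while `max_epochs` and the `fit()` call are illustrative placeholders, not code taken from the PR:

```python
import pytorch_lightning as pl

# Sketch only: distributed parameters as suggested above (PyTorch Lightning 1.2.x API).
trainer = pl.Trainer(
    gpus=8,             # GPUs per node
    num_nodes=4,        # number of machines in the cluster
    accelerator="ddp",  # distributed data parallel
    max_epochs=10,      # placeholder
)
# trainer.fit(module, datamodule=datamodule)  # `module`/`datamodule` come from your own finetune setup
```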
<|||||>Sure I should be able to try it tomorrow in the early afternoon. I will post in this thread afterwards.<|||||>> Sure I should be able to try it tomorrow in the early afternoon. I will post in this thread afterwards.
Since I have only one HPC cluster, I just run the process with CUDA_VISIBLE_DEVICES-0,1,2,3... bash script. It works fine for me. Can you please elaborate a little bit on this statement " I don't believe the sharded_ddp plugin works without the distributed launch." ?<|||||>Sure I was referencing the plugin for fairscale with pytorch lightning:
https://medium.com/pytorch/pytorch-lightning-1-1-model-parallelism-training-and-more-logging-options-7d1e47db7b0b
I was under the impression to use that plugin it had to be launched with the distributed pytorch launch but honestly I've never tried it with just launching it via python. When I trained on the old RAG model i used the distributed command and passed --plugins ddp_sharded but i suppose it might just work with regular python launching. I don't currently have the ability to test it until tomorrow though.<|||||>I believe I had to alter the original RAG code to work with pytorch lightning 1.1 as it was based on 1.0 but I needed to use fairscale to use a batch size larger than 1. Unfortunately I no longer have access to the repository I was pushing to at that time, which was a few months ago.<|||||>@calderma Exactly, I also think it is something to do with the launching of the script with multiple nodes.
I am asking this because PL has changed a lot compared to the version that used in the original RAG. Especially plugins. In the upgraded RAG, I had to remove [this method with plugins](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag/finetune_rag.py#L84) and use Callbacks since PL at the moment is not that much encouraging us to use custom pluggings.
<|||||>I'd imagine that the distributed launch isn't worth worrying about then but I can still test it tomorrow if you would like to see if it makes the ray workers behave.<|||||>Thanks a lot.<|||||>Hello again. I pulled your latest changes and tried running it again. I didn't get the distributed training to work with those changes but I'm by no means an expert at that so maybe someone better at distributed pytorch-lightning could take a look at it. I did notice a few other things with regards to training.
1. It seems the --data_cache_dir option doesn't exist in the code anymore? looking through finetune_rag.py i didn't see it anywhere but when i looked back at previous commits it was there in the arguments.
2. I had to manually create the kb_shards directory to get it to run. Could have easily been an issue on my end regarding permissions or something.
3. I got the error:
callbacks_rag.py", line 44, in get_checkpoint_callback
every_n_val_epochs=1, # maybe save a checkpoint every time val is run, not just end of epoch.
TypeError: __init__() got an unexpected keyword argument 'every_n_val_epochs'
There was an issue with my cluster apparently that should be resolved soon so I'll try again but I don't believe it was related to these errors. <|||||>@calderma
1. I did swamp the data_cache_dir with cache_dir parameter given in lightning_base.py.
2. Now you do not need to create kb-shards directory manually, check [this line, which creates them automatically](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L689)
3. I think you need to update PL. Or you can run the code by [swapping those parameters with **period**](https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.ModelCheckpoint.html#pytorch_lightning.callbacks.ModelCheckpoint).
For the distributed running, I checked with PyTorch lightning examples. Seem like it depends on how you have constructed the cluster. Code-wise we do not need to change anything other than adding num_nodes and accelerator parameters to the trainer.
**trainer = Trainer(gpus=8, num_nodes=4, accelerator='ddp')**, which I already did. [ See these examples, it seems pretty straightforward](https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster).
Now you can also run the original [RAG with new PyTorch lightning.](https://github.com/huggingface/transformers/pull/11806)
<|||||>great sounds good. The only thing that might need to be switched with regard to the first one is I believe the finetune_rag_ray_end2end.sh script still passes "--data_cache_dir". Regarding the PL version I was going by what was in the requirements file. my pip list shows pytorch-lightning == 1.2.10 which seems to be what's in the requirements. Thanks for the help with the distributed PL!<|||||>Omg yeah. My bad! Thanks a lot highlighting them. I will quickly change it
and push it.
<|||||>Great! I'm going to run it over the weekend so I will let you know if I hit any other roadblocks or if it finishes without issue.<|||||>
Hi Patrick and Quentin ,
**I added the testing files in the test_run folder.** [test_rag_new_features.sh ](https://github.com/shamanez/transformers/blob/rag-retriever-end2end/examples/research_projects/rag-end2end-retriever/test_run/test_rag_new_features.sh#L6 )tests if the new functions are working and test_finetune.sh trains with a dummy dataset.
Additionally, We also did a small experiment with the SQuAD dataset using all context passages as the knowledge base. The increase in the EM-Scores was better than we expected. Users also can compare these two methods.
Cheers, @patrickvonplaten @lhoestq <|||||>Just to close the loop on this, my test run of the end2end went smoothly and I had no additional roadblocks. Thanks!<|||||>@calderma , I had some problems with version control. So I created a revamp pull request .. can you just run in and let me know :)
https://github.com/huggingface/transformers/pull/11893
<|||||>sure i'll run it today<|||||>In this version I added the following:
1. Added test functions as requested in [this comment](https://github.com/huggingface/transformers/pull/11655#pullrequestreview-658830866).
2. Added results section to the README.
<|||||>Closing in favor of #11893<|||||>@calderma
> 1. When launching the code in distributed mode (CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8) I received the error:
> handle = worker.core_worker.get_named_actor_handle(name)
> File "python/ray/_raylet.pyx", line 1496, in ray._raylet.CoreWorker.get_named_actor_handle
> File "python/ray/_raylet.pyx", line 157, in ray._raylet.check_status
> ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.
>
@calderma I think We found the problem when running RAG with RAY on distributed systems.
In some distributed systems, **os.environ["NODE_RANK"]** is a string but not an integer.
So basically the if condition can get messed up. Please update the if condition in finetune.py as follows:
```python
if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) and ("NODE_RANK" not in os.environ or int(os.environ["NODE_RANK"]) == 0):
```
Hope this solves the problem!
|
transformers | 11,654 | closed | add bigbird-pegasus evaluation notebook | # What does this PR do?
Add bigbird-pegasus evaluation notebook
@patrickvonplaten | 05-10-2021 08:43:25 | 05-10-2021 08:43:25 | |
transformers | 11,653 | closed | Add DETR | # What does this PR do?
It adds Facebook AI's DETR model (end-to-end object detection with Transformers). It's a clean PR based on #11506.
The 3 models are called `DetrModel`, `DetrForObjectDetection` and `DetrForSegmentation`. The latter was first called `DetrForPanopticSegmentation`, but as it can also be used to do only instance segmentation, I renamed it.
To do:
- [x] address remaining comments
- [x] fix remaining tests (there are still 2 tests failing for `test_modeling_detr.py`) - here I'd like some help
- [x] add remaining checkpoints to the hub
- [ ] add notebooks to showcase how to do inference/fine-tuning on custom data
- [x] perhaps also write more documentation | 05-10-2021 07:33:06 | 05-10-2021 07:33:06 | @LysandreJik all comments are addressed, also added 2 community notebooks. PR is ready! |
transformers | 11,652 | closed | [DOC] Fine-Tuning NER Custom Dataset Clarification | I'm following [this](https://huggingface.co/transformers/custom_datasets.html#tok-ner) guide for fine-tuning for NER with a custom dataset. I struggled with the example code for `def encode_tags()` until I realized, that the tokens per sample are limited to 512 and my dataset exceeded this in some instances. This resulted in errors like this:
`ValueError: NumPy boolean array indexing assignment cannot assign 544 input values to the 464 output values where the mask is true`.
I currently assume, the limit is due to the specific Tokenizer. I'm using `tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased')` as in the example.
I'm proposing to add a clarification about the token limit per sample assumption like this:
https://github.com/huggingface/transformers/edit/master/docs/source/custom_datasets.rst
Line 365 and following:
> Let's write a function to do this. This is where we will use the ``offset_mapping`` from the tokenizer as mentioned
> above. For each sub-token returned by the tokenizer, the offset mapping gives us a tuple indicating the sub-token's
> start position and end position relative to the original token it was split from. That means that if the first position
> in the tuple is anything other than ``0``, we will set its corresponding label to ``-100``. While we're at it, we can
> also set labels to ``-100`` if the second position of the offset mapping is ``0``, since this means it must be a
> special token like ``[PAD]`` or ``[CLS]``.
And append: `Be aware that this example has an upper limit of 512 tokens per sample.`
Let me know your thoughts and I'll open a PR, if you find this useful. | 05-10-2021 06:54:43 | 05-10-2021 06:54:43 | When did you encounter that `ValueError`? Normally, if you pass text into the `__call__` method of a tokenizer with `truncation` set to `True`, you shouldn't encounter any errors, as sequences that are too long are truncated.<|||||>Hi Niels, thanks for your feedback. The Exception is raised when calling `encode_tags`. I'm following the tutorial code, just my dataset is not WNUT-17.
> train_labels = encode_tags(train_tags, train_encodings)
> val_labels = encode_tags(val_tags, val_encodings)<|||||>Update: Even when I ensure the number of tokens per sample is <= 512, I get ValueErrors from calling `encode_tags` on some samples. I'll try to understand this better or provide a demo.<|||||>Update: The function `tokenize_and_align_labels` from the [token classification example notebook](https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb) (cell 23) works fine on my data.<|||||>Having the same error on my custom dataset, with truncation set to true, with the Camembert tokenizer.
`ValueError: NumPy boolean array indexing assignment cannot assign 1078 input values to the 319 output values where the mask is true`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Having the same issue, from the same tutorial, with a custom dataset. Any ideas on how to fix it?<|||||>@Famaral97 You may want to take a look here: https://github.com/huggingface/transformers/issues/11652#issuecomment-839496635
This worked well for my dataset.<|||||>@jorahn Thanks for letting us know about the tokenize_and_align_labels() function. But when I follow the method mentioned in the notebook I'm getting an error with data collator saying:
AttributeError: 'tokenizers.Encoding' object has no attribute 'keys'
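For readers following this thread, a rough paraphrase of the label-alignment function referenced above (based on the public token-classification example notebook, not copied verbatim from it). It assumes a *fast* tokenizer such as `DistilBertTokenizerFast`, so that `word_ids()` is available, and batched examples with pre-split words in `"tokens"` and per-word label ids in `"ner_tags"`:

```python
def tokenize_and_align_labels(examples, tokenizer):
    # truncation=True keeps every sequence within the model's 512-token limit
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, word_labels in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word_id = None
        label_ids = []
        for word_id in word_ids:
            if word_id is None:                # special tokens ([CLS], [SEP], padding)
                label_ids.append(-100)
            elif word_id != previous_word_id:  # first sub-token of a word keeps the label
                label_ids.append(word_labels[word_id])
            else:                              # remaining sub-tokens are ignored by the loss
                label_ids.append(-100)
            previous_word_id = word_id
        all_labels.append(label_ids)
    tokenized["labels"] = all_labels
    return tokenized
```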
|
transformers | 11,651 | closed | BigBird on TPU | # What does this PR do?
This PR enables BigBird to work on TPUs. The problem was that we were concatenating tensors of different dtypes; this PR fixes that.
See this notebook to infer BigBird on TPUs: https://colab.research.google.com/drive/1ptZlDuEgmoElWmPmrZXHA7uWjvra9G8T#scrollTo=sR2Yk-HzmGnw
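As a generic illustration of the kind of failure described above (not the actual model code; the tensor names are made up), the usual fix is to cast everything to a common dtype before concatenating, which is what broke on TPU here:

```python
import torch

attention_scores = torch.zeros(2, 4, dtype=torch.float32)  # illustrative float tensor
band_mask = torch.ones(2, 4, dtype=torch.int64)            # illustrative integer mask

# Cast to one dtype before concatenating; mixed dtypes are what caused trouble on TPU/XLA.
merged = torch.cat([attention_scores, band_mask.to(attention_scores.dtype)], dim=-1)
print(merged.dtype)  # torch.float32
```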
Fixes #11363
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 05-10-2021 06:44:15 | 05-10-2021 06:44:15 | Hi, do you guys have any idea whether we can still train with `trainer` on TPU with BigBird? I am still facing this same error.
Is there any tentative timeline by which this problem can be fixed?<|||||>BigBird will be merged in Flax soon. It is recommend to use `FlaxBigBird` for TPU <|||||>That's great news! Till when can we expect it to be available?<|||||>It will get merged this week (may be in a day or two). |
transformers | 11,650 | closed | [Examples] Fix invalid links after reorg | # What does this PR do?
The links in some examples' READMEs weren't updated after the example reorg (#11350).
I simply `grep -R https://github.com/huggingface/transformers/blob/master/examples` in examples and fix those invalid links.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger (author of #11350)
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-09-2021 20:04:04 | 05-09-2021 20:04:04 | |
transformers | 11,649 | closed | Reformer inference widget is broken | From: https://huggingface.co/google/reformer-enwik8?text=My+name+is+Julien+and+I+like+to

@patrickvonplaten | 05-09-2021 17:42:04 | 05-09-2021 17:42:04 | Yeah Reformer has no tokenizer so this doesn't work...sadly I also think since Reformer doesn't work super well, it's low priority to fix this (cc @Narsil )<|||||>Seems like the tokenizer fix is not that hard to make given the README (simply byte shift+2).
However the code seems to refer to a spiece.model : https://github.com/huggingface/transformers/blob/master/src/transformers/models/reformer/tokenization_reformer.py#L37
How bad is reformer performing ? (I am also curious to know how bad are character level models)
Edit: OK, this one is different because it is character level, but it's different from regular transformers that uses spiece, I see.<|||||>The character level Reformer model does quite well at text generation
Here is a demo (from a while back) for generating Wikipedia like entries:
https://colab.research.google.com/drive/1Oao8vBtDkz6v1E1efUhTlug5uuxC_BnE?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
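For anyone curious about the "byte shift + 2" remark above, a hedged sketch of what that character-level mapping would look like. This is an assumption based only on that comment (ids 0 and 1 presumably being reserved for special tokens); the `google/reformer-enwik8` model card is the authoritative reference:

```python
# Sketch of a byte-level "shift by 2" encoding/decoding, assuming ids 0/1 are reserved.
def encode(text):
    return [b + 2 for b in text.encode("utf-8")]

def decode(token_ids):
    return bytes(i - 2 for i in token_ids if i >= 2).decode("utf-8", errors="ignore")

print(decode(encode("My name is Julien")))  # round-trips back to the input string
```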
transformers | 11,648 | closed | Very large difference between the results after resume | Dear HuggingFace team,
There is unfortunately no option to reopen closed issues on my side. The issue I reported here [1] still exists when testing the latest version of transformers; I have added my comments on the same ticket. Could you kindly reopen this bug?
The variations are very high after resuming, which makes the results unusable when resuming from a checkpoint. I also tried to make things deterministic in torch, but that did not solve the issue either. I study at an institution where I only have access to GPUs for short hours, so I would very much appreciate your help in making training reproducible after resuming from Trainer checkpoints.
@sgugger
[1] https://github.com/huggingface/transformers/issues/11323 | 05-09-2021 17:17:18 | 05-09-2021 17:17:18 | Please do not open a duplicate issue, you can reopen the old one.<|||||>Dear Sylvain
If the HuggingFace team closes an issue, the user is not able to reopen it, at least on my side. I have also attached a screenshot of it.
https://ibb.co/56S00LT
Thank you. <|||||>Your screenshot does not show the bottom of the dialog box where there should be the button "Reopen" normally.<|||||>And if not, please ask us to reopen the issue on the issue discussion, do not open a duplicate :-)<|||||>there is really no reopen option on the user's side. Sure, I will make sure to ask and will avoid recreating the issue, thank you very much for the remark :) I will pay attention
transformers | 11,647 | open | Key Error: 'pre-processing' during conversion from tatoeba to Marian model | ## Environment info
- `transformers` version: `4.6.0.dev0`
- Platform: `CentOS Linux release 7.7.1908 (Core)`
- Python version: `3.8.5`
- PyTorch version: `1.8.1 + cuda 10.2`
- Tensorflow version: N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Marian: @patrickvonplaten , @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Marian
The problem arises when using:
* [x] the official example scripts: tatoeba to marian model script
* [ ] my own modified scripts
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: machine translation
* [ ] my own task or dataset
## To reproduce
Following the script from [scripts/tatoeba/README.md ](https://github.com/huggingface/transformers/tree/master/scripts/tatoeba)
1.
```
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install pandas GitPython wget
```
2.
```
curl https://cdn-datasets.huggingface.co/language_codes/language-codes-3b2.csv > language-codes-3b2.csv
curl https://cdn-datasets.huggingface.co/language_codes/iso-639-3.csv > iso-639-3.csv
```
3. `git clone git@github.com:Helsinki-NLP/Tatoeba-Challenge.git`
4. `python src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models kor-eng eng-kor --save_dir converted/`
Error message:
```
Traceback (most recent call last):
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 1267, in <module>
resolver = TatoebaConverter(save_dir=args.save_dir)
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 58, in __init__
reg = self.make_tatoeba_registry()
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 258, in make_tatoeba_registry
return [(k, v["pre-processing"], v["download"], v["download"][:-4] + ".test.txt") for k, v in results.items()]
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 258, in <listcomp>
return [(k, v["pre-processing"], v["download"], v["download"][:-4] + ".test.txt") for k, v in results.items()]
KeyError: 'pre-processing'
```
## Expected behavior
Conversion of the model from Tatoeba to Marian for the chosen language pair with no errors.
| 05-09-2021 09:28:22 | 05-09-2021 09:28:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>@patil-suraj - It would be really nice if we could tackle the tatoeba models at some point...
This seems to be related: https://github.com/huggingface/transformers/pull/12192
https://github.com/huggingface/transformers/issues/10943 |
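Until a proper fix lands, a hedged, diagnostic-only workaround for the registry read shown in the traceback above (the list comprehension in `make_tatoeba_registry`). The helper name is made up; it just skips entries that lack the expected keys so you can see which models are affected, and is not the official fix:

```python
# Diagnostic sketch only: mirror the list comprehension from the traceback, but skip
# registry entries that are missing "pre-processing"/"download" instead of crashing.
def make_registry_entries(results):
    entries, skipped = [], []
    for k, v in results.items():
        if "pre-processing" not in v or "download" not in v:
            skipped.append(k)
            continue
        entries.append((k, v["pre-processing"], v["download"], v["download"][:-4] + ".test.txt"))
    if skipped:
        print(f"Skipped {len(skipped)} entries without pre-processing/download info, e.g. {skipped[:10]}")
    return entries
```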
transformers | 11,646 | closed | Strange implementation of `convert_tokens_to_string` in albert tokenizer. | Hi,
the albert tokenizer implements the `convert_tokens_to_string` function:
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/albert/tokenization_albert.py#L222-L223
While the DeBERTa-v2 and some other tokenizers just delegate this to the sentencepiece tokenizer:
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L146
IMO it would be better to always delegate to the sentencepiece tokenizer. What do you think?
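Concretely, the proposed change would make ALBERT's method a one-line delegation like the DeBERTa-v2 one linked above. A minimal sketch, assuming the slow tokenizer keeps its sentencepiece processor as `self.sp_model` (as the ALBERT tokenizer does); `decode_pieces` is the more explicit sentencepiece call if `decode` does not accept pieces in your version:

```python
# Sketch of the proposed delegation to the sentencepiece processor.
def convert_tokens_to_string(self, tokens):
    return self.sp_model.decode(tokens)
```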
## PS:
Some more examples here
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/barthez/tokenization_barthez.py#L251-L252
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/camembert/tokenization_camembert.py#L251-L252
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/m2m_100/tokenization_m2m_100.py#L187-L188
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/mbart/tokenization_mbart50.py#L208-L209
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/speech_to_text/tokenization_speech_to_text.py#L169-L173
https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py#L264-L265 | 05-09-2021 04:49:29 | 05-09-2021 04:49:29 | Indeed, you're probably right! When updating the ALBERT tokenizer to use the `sentencepiece.decode` instead of the manual handling - do all tests pass? Even the integration test?
Makes me think we really should have integration tests for all tokenizers, as scenarios like this one are bound to happen.<|||||>Well yes. While "adding subword regularization in more tokenizers": #11417
I recognized that the tokenizers could benefit from some bigger refactoring.
Pulling common functions into a base class would be nice, and while doing this, adding tests...
There is a lot of duplicate code there...
I might do this as a PR the next days (weeks) - we will see.<|||||>PR with a fix started: #11716<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still working on this...<|||||>Fixed in #11716 closing here. |
transformers | 11,645 | closed | Bad result in fine-tuning XLNet for SQuAD | Hello,
I'm fine-tuning XLNet on the SQuAD v1.1 task, but I get bad results. I got the XLNet checkpoint from the [model hub](https://huggingface.co/xlnet-base-cased).
GPU: single GeForce RTX 3090 24G
running script:
`CUDA_VISIBLE_DEVICES=4 python ./examples/pytorch/question-answering/run_qa.py --model_name_or_path ../pretrained_model/xlnet_base --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --overwrite_output_dir --output_dir ../squad/xlnet_base`
result:
`{
"epoch": 2.0,
"eval_samples": 10848,
"exact_match": 12.639545884578997,
"f1": 14.638577161480404,
"train_runtime": 7179.3726,
"train_samples": 88835,
"train_samples_per_second": 2.062
}`
| 05-08-2021 12:34:09 | 05-08-2021 12:34:09 | Hi @Timothy023
Please use the [forum](https://discuss.huggingface.co/) to post such questions. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,644 | closed | Cannot load studio-ousia/luke-base for AutoModelForTokenClassification | ## Environment info
- `transformers` version: 4.6.0.dev0 (pulled from repo)
- Platform: 3
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0 (no)
- Tensorflow version (GPU?): 2.4.1 (no)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
I tried loading LUKE's weight for AutoModelForTokenClassification. I intend to train further for NER. It failed due to a configuration error.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-base") #Succesful
model = AutoModel.from_pretrained("studio-ousia/luke-base") #Succesful
model = AutoModelForTokenClassification.from_pretrained("studio-ousia/luke-base", num_labels=39)
```
```
Some weights of the model checkpoint at studio-ousia/luke-base were not used when initializing LukeModel: ['embeddings.position_ids']
- This IS expected if you are initializing LukeModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LukeModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-b4efbc1b7796> in <module>
7 model = AutoModel.from_pretrained("studio-ousia/luke-base")
8
----> 9 model = AutoModelForTokenClassification.from_pretrained("studio-ousia/luke-base", num_labels=39)
/opt/conda/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
381 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
382 raise ValueError(
--> 383 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
384 f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."
385 )
ValueError: Unrecognized configuration class <class 'transformers.models.luke.configuration_luke.LukeConfig'> for this kind of AutoModel: AutoModelForTokenClassification.
Model type should be one of BigBirdConfig, ConvBertConfig, LayoutLMConfig, DistilBertConfig, CamembertConfig, FlaubertConfig, XLMConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MegatronBertConfig, MobileBertConfig, XLNetConfig, AlbertConfig, ElectraConfig, FunnelConfig, MPNetConfig, DebertaConfig, DebertaV2Config, IBertConfig.
```
## Expected behavior
Succesful loading | 05-08-2021 12:18:47 | 05-08-2021 12:18:47 | cc @NielsRogge<|||||>LUKE does not have a `*ForTokenClassification` model, so it's not available in the token classification auto model. The following three models are available: `LukeForEntityClassification`, `LukeForEntityPairClassification`, `LukeForEntitySpanClassification`<|||||>Yes, I explicitly did not include the head models of LUKE in any `AutoModel`, as LUKE works a bit differently than other models.
For NER, it does not use a token classification head as other models like BERT do. Instead, LUKE considers all possible entity spans in a sentence, which are then classified accordingly. See the [code example](https://huggingface.co/transformers/master/model_doc/luke.html#transformers.LukeForEntitySpanClassification) for reference. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @NielsRogge ,
I am trying to finetune Luke on a custom dataset for the NER task. As mentioned here, I infer we need to do it through LukeForEntitySpanClassification. Can you please guide me on how I could achieve the same? I am unaware of any existing tutorial for the same.
Thank You<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, @Sreyan88 Were you able to train LUKE on a custom dataset? I am also working on the same and do not have any progress on this yet. Any help is appreciated. Thanks! |
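For readers landing here with the same question, a hedged sketch of the entity-span approach Niels describes above. The checkpoint, label count and spans below are illustrative placeholders; the official `LukeForEntitySpanClassification` documentation example should be treated as the reference:

```python
from transformers import LukeTokenizer, LukeForEntitySpanClassification

# task="entity_span_classification" makes the tokenizer emit the span position tensors
# the model expects; num_labels is whatever your NER tag set needs.
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base", task="entity_span_classification")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-base", num_labels=5)

text = "Beyoncé lives in Los Angeles"
# Character-level spans of candidate entities; in practice you enumerate all candidate spans.
entity_spans = [(0, 7), (17, 28)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)                 # outputs.logits: (batch, num_spans, num_labels)
predicted_labels = outputs.logits.argmax(-1)
```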
transformers | 11,643 | closed | How to train TFBertForTokenClassification without padding mechanism | I have a situation where the inputs are variable-length, and I want to train on sentences one by one. I do not want to use padding and train in batches.
From what I have studied, Keras supports fit_generator, but this mechanism is deprecated in TF2.
Is there any suggestion, I have no idea and need help | 05-08-2021 11:16:16 | 05-08-2021 11:16:16 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
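For anyone landing on this thread, a rough sketch of one way to feed variable-length examples without padding via `tf.data` (requires TF 2.4+ for `output_signature`). The `examples` variable, checkpoint and hyperparameters are placeholders, not taken from the question:

```python
import tensorflow as tf
from transformers import TFBertForTokenClassification

# `examples` is assumed to be an iterable of (list_of_token_ids, list_of_label_ids) pairs.
model = TFBertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

def gen():
    for input_ids, labels in examples:
        yield ({"input_ids": input_ids, "attention_mask": [1] * len(input_ids)}, labels)

dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        {"input_ids": tf.TensorSpec([None], tf.int32),
         "attention_mask": tf.TensorSpec([None], tf.int32)},
        tf.TensorSpec([None], tf.int32),
    ),
).batch(1)  # one variable-length sentence per step, so nothing needs padding

optimizer = tf.keras.optimizers.Adam(3e-5)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
for batch, labels in dataset:
    with tf.GradientTape() as tape:
        logits = model(batch, training=True).logits   # (1, seq_len, num_labels)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```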
transformers | 11,642 | closed | IDE cannot correctly navigate to references, It will navigate all object to `transformers/utils/dummy_pt_objects.py`. | # Enviornment
- `transformers` version: 4.5.1
- Platform: macOS-11.1-arm64-arm-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
# Issues
When using PyCharm or VSCode with `Pylance` (an extension), right-click model object will navigate users to `transformers/utils/dummy_pt_objects.py` rather than the correct path. | 05-08-2021 08:59:18 | 05-08-2021 08:59:18 | In PyCharm, you can select all `dummy*` files, right-click and select "Mark as Plain Text". This should prevent the IDE from navigating to the dummy files.
I have never used VSCode with `Pylance` however.<|||||>need some help from vscode user, does anyone found the same issue?<|||||>I'm also having this issue with VS Code.<|||||>Hello @liyucheng09!
This happens due to dynamic checks that happen to optimize imports in the transformers codebase. (e.g. when you import fast tokenizers using `is_tokenizers_available()` method ([example](https://github.com/huggingface/transformers/blob/3694484d0ae6f1b2e4f60460d6767f5d90442ba9/src/transformers/__init__.py#L1912)) Pylance isn't able to infer such value while analysing and thus uses the else branch statements instead which leads to using dummy classes as if you haven't got tokenizers available).
Unfortunately, there is not much you can do. Pylance team recommends reaching out to maintainers in order to make code compatible with Pyright/Pylance static analyzer.
Also, you can patch the installed module to make it compatible with Pylance:
https://gist.github.com/mozharovsky/0753e9eba28cda8890e2daa009c5e0b3<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Can this be reopened? It is still occurring and bothersome. The solution proposed by @mozharovsky is a good one, it can be converted into a PR if one of the maintainers agree.<|||||>The solution proposed by @mozharovsky would not reflect the reality when you don't have a dependency installed (for instance if you don't have tokenziers installed, it's normal to be sent to the dummy object by your IDE). It would also need to rewrite part of our internal tooling that checks the consistency of the inits. Not saying it's impossible but it's quite a bit of work.
None of the main maintainers will work on this, but we can look at a PR if someone is motivated enough. We won't give up the dynamic checks that happen to optimize the imports however.
<|||||>Ok, fair enough. I can probably work on it. What is this internal tooling you're talking about? Could you give a link to the part that needs changing? This issue is important because it makes code completion and type checking unusable for users of vscode and mypy.<|||||>The internal tooling checking the inits is [here](https://github.com/huggingface/transformers/blob/master/utils/check_inits.py). I'm not sure how it will behave after a change in the inits, so I can't point to you the exact part that will need changing once the inits are modified :-)<|||||>Thanks for the link, I'll take a look! Regarding not reflecting the reality, it's true what you said. But it's impossible to determine if another library is installed during static type checking. Given this, what is the preferred option -
A. All users get code completion and avoid type checking errors, even if some dependencies are missing
B. None of the users will have code completion and everyone gets type errors even if they have dependencies installed
I think A. If you disagree please tell me now so I can avoid working on the PR :)<|||||>Pyright and Pylance recently added this change: https://github.com/microsoft/pylance-release/issues/2402
I think this should now be straightforward to resolve by replacing the conditional statements highlighted by @mozharovsky with `try/except`. |
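A schematic illustration of the two init patterns being discussed (generic code, not the actual `transformers/__init__.py`): a static analyzer cannot evaluate the runtime availability check, so it may follow the fallback branch, whereas Pyright/Pylance understands the try/except form around imports:

```python
import importlib.util

def is_tokenizers_available():
    return importlib.util.find_spec("tokenizers") is not None

# Current style: opaque to Pyright/Pylance, so it may resolve names to the fallback branch.
if is_tokenizers_available():
    from tokenizers import Tokenizer
else:
    Tokenizer = None  # stand-in for the dummy placeholder object

# Analyzer-friendly style: try/except around the import is special-cased by the analyzer.
try:
    from tokenizers import Tokenizer
except ImportError:
    Tokenizer = None  # stand-in for the dummy placeholder object
```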
transformers | 11,641 | closed | How to change training epochs when using run_summarization.py | ### Who can help
Models:
- bart, t5: @patrickvonplaten, @patil-suraj
## Information
I am using t5-large and t5-base to train my custom model with my custom csv dataset through running run_summarization.py. But I found that t5-small's performance is better than t5-base and t5-large. I think that is because we only train for 3 epochs in run_summarization.py. Can you tell me how to change the number of training epochs?
I'm not sure whether my thinking is correct. Feel free to provide more suggestion. After all, it is strange that the performance of t5-small is better than t5-large and t5-base.
Thank you very much!!!
| 05-08-2021 06:56:15 | 05-08-2021 06:56:15 | Hi there,
`run_summarization.py` uses `Trainer`, you can pass the `--num_train_epochs` to control the number of epochs. Please find the docs [here](https://huggingface.co/transformers/main_classes/trainer.html).
Also please use the [forum](https://discuss.huggingface.co/) to ask such questions, issues are for bugs and feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
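For reference, a hedged sketch of the programmatic equivalent of the `--num_train_epochs` flag mentioned above; on the command line it is simply appended to the `run_summarization.py` invocation. The output directory and batch size below are placeholders:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_summarization_model",   # placeholder path
    num_train_epochs=10,                   # the script's default is 3
    per_device_train_batch_size=4,         # placeholder
)
```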
transformers | 11,640 | closed | Multilingual MobileBERT | # 🌟 New model addition
## Model description
Recently Google released multilingual MobileBERT on [tensorflow/models](https://github.com/tensorflow/models/blob/master/official/nlp/projects/mobilebert/README.md)
> In addition, we also provide new multiple-lingual MobileBERT checkpoint trained using multi-lingual Wiki data.
<!-- Important information -->
## Open source status
* [x] the model implementation is available: [tensorflow/models](https://github.com/tensorflow/models/blob/master/official/nlp/projects/mobilebert/README.md)
* [x] the model weights are available: Yup, [here](https://storage.cloud.google.com/tf_model_garden/official/mobilebert/multi_cased_L-24_H-128_B-512_A-4_F-4_OPT.tar.gz)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 05-08-2021 03:16:21 | 05-08-2021 03:16:21 | |
transformers | 11,639 | closed | I-BERT tokenizer not loading; example code not working. | Following the example [here](https://huggingface.co/transformers/model_doc/ibert.html), I'm trying to load the 'kssteven/ibert-roberta-base' tokenizer:
```
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('kssteven/ibert-roberta-base')
```
It errors out as follows:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/carola/opt/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1710, in from_pretrained
resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
File "/Users/carola/opt/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1781, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/carola/opt/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/roberta/tokenization_roberta.py", line 171, in __init__
**kwargs,
File "/Users/carola/opt/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2.py", line 179, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
Using transformers version 4.5.1 on Mac or Ubuntu | 05-07-2021 23:39:04 | 05-07-2021 23:39:04 | Hi @carolmanderson ,
Some tokenizer files were missing from the model repo. I've uploaded them, it should now work.<|||||>Hi,
I want to test IBERT's speedup, and I have done exactly what is said in https://huggingface.co/kssteven/ibert-roberta-base. For the quantization part, when I set quant_mode to true and run the evaluation again, I get a much slower model. What am I doing wrong?
Thank you for your reply!<|||||>Would be nice if you open a new issue for this. Thanks.<|||||>Thanks @patil-suraj , it is working for me now. <|||||>I'm facing the same issue in 4.9
`--> 173 self.transformer_tokenizer = RobertaTokenizerFast.from_pretrained(transformer, **tokenizer_parameters)
174 self.transformer_config = AutoConfig.from_pretrained(transformer)
175 self.network = NERDANetwork(self.transformer_model, self.device, len(tag_complete), dropout = dropout)
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1731
1732 return cls._from_pretrained(
-> 1733 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
1734 )
1735
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
1848 # Instantiate tokenizer.
1849 try:
-> 1850 tokenizer = cls(*init_inputs, **init_kwargs)
1851 except OSError:
1852 raise OSError(
/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/tokenization_roberta_fast.py in __init__(self, vocab_file, merges_file, tokenizer_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, **kwargs)
171 mask_token=mask_token,
172 add_prefix_space=add_prefix_space,
--> 173 **kwargs,
174 )
175
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py in __init__(self, vocab_file, merges_file, tokenizer_file, unk_token, bos_token, eos_token, add_prefix_space, **kwargs)
143 eos_token=eos_token,
144 add_prefix_space=add_prefix_space,
--> 145 **kwargs,
146 )
147
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
105 elif fast_tokenizer_file is not None and not from_slow:
106 # We have a serialization from tokenizers which let us directly build the backend
--> 107 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
108 elif slow_tokenizer is not None:
109 # We need to convert a slow tokenizer to build the backend
Exception: No such file or directory (os error 2)`
<|||||>Same problem.
When I tried to debug this, I found that this code loads a config file [tokenizer_config](https://huggingface.co/kssteven/ibert-roberta-base/resolve/main/tokenizer_config.json) and tries to load this file:

My transformers version: 4.10.0.dev0. <|||||>> I'm facing the same issue in 4.9
>
> `--> 173 self.transformer_tokenizer = RobertaTokenizerFast.from_pretrained(transformer, **tokenizer_parameters)
> 174 self.transformer_config = AutoConfig.from_pretrained(transformer)
> 175 self.network = NERDANetwork(self.transformer_model, self.device, len(tag_complete), dropout = dropout)
>
> /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
> 1731
> 1732 return cls._from_pretrained(
> -> 1733 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
> 1734 )
> 1735
>
> /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs)
> 1848 # Instantiate tokenizer.
> 1849 try:
> -> 1850 tokenizer = cls(*init_inputs, **init_kwargs)
> 1851 except OSError:
> 1852 raise OSError(
>
> /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/tokenization_roberta_fast.py in **init**(self, vocab_file, merges_file, tokenizer_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, **kwargs)
> 171 mask_token=mask_token,
> 172 add_prefix_space=add_prefix_space,
> --> 173 **kwargs,
> 174 )
> 175
>
> /usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py in **init**(self, vocab_file, merges_file, tokenizer_file, unk_token, bos_token, eos_token, add_prefix_space, **kwargs)
> 143 eos_token=eos_token,
> 144 add_prefix_space=add_prefix_space,
> --> 145 **kwargs,
> 146 )
> 147
>
> /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in **init**(self, *args, **kwargs)
> 105 elif fast_tokenizer_file is not None and not from_slow:
> 106 # We have a serialization from tokenizers which let us directly build the backend
> --> 107 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
> 108 elif slow_tokenizer is not None:
> 109 # We need to convert a slow tokenizer to build the backend
>
> Exception: No such file or directory (os error 2)`
Hi, have you solved this problem?<|||||>@xiangsanliu this should be fixed now, the hardcoded `tokenizers_file` path is now removed (cf hub commit https://huggingface.co/kssteven/ibert-roberta-base/commit/0857df571974cf0633da7536addb8b9da230293b)
you could pass `force_download=True` to `.from_pretrained` to get the update config file. |
transformers | 11,638 | closed | [Deepspeed Wav2vec2] integration | Addressing the need in https://github.com/huggingface/transformers/issues/11446, this PR is working on making wav2vec2 work under deepspeed.
This PR:
* changes Trainer to automatically convert inputs to the correct dtype if they are not int64 - we didn't need this for nlp models because embeddings took care of it, but that is not the case with wav2vec2-type models, whose inputs are float32 by default (for deepspeed only at the moment - potentially need to do the same for `fp16_full_eval`). A minimal sketch of the idea is shown right after this list.
* multiple fixes to the `wav2vec2` model, because it does very non-standard things, like using `weight_norm`, which is implemented in a very odd way; deepspeed's automatic handling fails on it and multiple manual adjustments are needed for it to do the right thing. `weight_norm` creates a param, then drops it, replacing it with 2 other params, and re-creates the original weight from them on every forward (in a pre-forward hook).
* moves `require_deepspeed` to `testing_utils.py` as we have multiple test files using it
* adds `dtype` accessor to DS conf object
* adds 8 new tests, checking each setup
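For the first bullet above, here is a minimal sketch of the idea (an illustration only, not the actual Trainer code; the helper name is made up): floating-point inputs are cast to the dtype the deepspeed config dictates, while integer tensors such as `input_ids` and `labels` are left alone.

```python
import torch

def cast_floating_inputs(inputs: dict, dtype: torch.dtype = torch.float16) -> dict:
    """Hypothetical helper: cast only floating-point tensors (e.g. raw audio
    `input_values`) and leave integer tensors (`input_ids`, `labels`) untouched."""
    return {
        k: v.to(dtype) if torch.is_tensor(v) and torch.is_floating_point(v) else v
        for k, v in inputs.items()
    }
```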
Testing with `run_asr.py`:
### ZeRO-2
Everything works:
* [x] fp16 distributed zero2
* [x] fp16 non distributed zero2
* [x] fp32 distributed zero2
* [x] fp32 non distributed zero2
important - must use for distributed use:
```
"zero_optimization": {
"find_unused_parameters": true,
```
So you can use the `--deepspeed examples/research_projects/wav2vec2/ds_config_wav2vec2_zero2.json` which already has the adjustment.
### ZeRO-3
You can use the `--deepspeed examples/research_projects/wav2vec2/ds_config_wav2vec2_zero3.json`
This works:
* [x] fp32 non distributed zero3
* [x] fp32 distributed zero3
* [x] fp16 non distributed zero3
* [x] fp16 distributed zero3
### Possible PR spin-offs
it looks like plain pytorch dist doesn't work either https://github.com/huggingface/transformers/issues/11452
so this PR can be adapted to detect `dist` and do the same as what deepspeed branch does. probably a separate PR is the best.
(LayerSkip that is)
---------------------
To run tests:
Install this deepspeed master https://github.com/microsoft/DeepSpeed:
```
pip install deepspeed
```
and then:
```
HF_DATASETS_IN_MEMORY_MAX_SIZE=0 RUN_SLOW=1 pyt examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py
```
---------------------
Example of usage: assuming you in a top dir of the git clone of this branch
### `run_asr.py` and tiny model and tiny dataset
This is the foundation for the new tests:
```
HF_DATASETS_IN_MEMORY_MAX_SIZE=0 PYTHONPATH=src deepspeed --num_gpus 2 \
examples/research_projects/wav2vec2/run_asr.py \
--output_dir=output_dir --num_train_epochs=2 --per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 --evaluation_strategy=steps --save_steps=500 --eval_steps=100 \
--logging_steps=5 --learning_rate=5e-4 --warmup_steps=3000 \
--model_name_or_path=patrickvonplaten/wav2vec2_tiny_random_robust \
--dataset_name=patrickvonplaten/librispeech_asr_dummy --dataset_config_name=clean \
--train_split_name=validation --validation_split_name=validation --orthography=timit \
--preprocessing_num_workers=1 --group_by_length --freeze_feature_extractor --verbose_logging \
--deepspeed examples/research_projects/wav2vec2/ds_config_wav2vec2_zero2.json
```
### run_common_voice.py
very hard to test with as it takes some 5-10mins to just get ready to run.
**edit**: switch to `datasets` master branch and add `HF_DATASETS_IN_MEMORY_MAX_SIZE=0` to the command line - it will be cached now.
`run_common_voice.py` now runs under `--fp16` but gives `loss=nan`, probably the same issue as bf16-pretrained models? I tested - it has the same issue under AMP and no deepspeed. So it's a different problem to solve.
fp32 works just fine loss-wise, you can try:
```
HF_DATASETS_IN_MEMORY_MAX_SIZE=0 PYTHONPATH="src" deepspeed --num_gpus=1 \
examples/research_projects/wav2vec2/run_common_voice.py \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" --dataset_config_name="tr" \
--output_dir=./wav2vec2-large-xlsr-turkish-demo --overwrite_output_dir --num_train_epochs="5" \
--per_device_train_batch_size="16" --learning_rate="3e-4" --warmup_steps="500" \
--evaluation_strategy="steps" --save_steps="5" --eval_steps="5" --logging_steps="5" \
--save_total_limit="3" --freeze_feature_extractor --feat_proj_dropout="0.0" --layerdrop="0.1" \
--gradient_checkpointing --group_by_length --do_train --do_eval --deepspeed \
examples/research_projects/wav2vec2/ds_config_wav2vec2_zero2.json
```
Thanks to @patrickvonplaten for making small wav2vec2 models which helped a ton to debug faster and they were needed for the tests.
## Requirements to merge this PR
- [x] https://github.com/microsoft/DeepSpeed/pull/1135
- [x] deepspeed version requirement bumped to 0.4.0
- [x] deepspeed 0.4.0 released
Fixes: https://github.com/huggingface/transformers/issues/11446
| 05-07-2021 22:29:53 | 05-07-2021 22:29:53 | Also link: #11452 here<|||||>Also while working on this I came up with a fused version of `Conv1d` + `WeightNorm` which is simpler to understand and work with as compared to the original `weight_norm` which uses a pre-forward hook, but I'm not sure how to test its correctness - I only tested it in my head.
I will paste it here for now should we want to try it in the future. I think it might need more tweaks for deepspeed in `compute_weight` as these would now not be partitioned. But I don't think they need to be partitioned, as deepspeed will have to gather these anyway.
The main cons is that I'm using non-public APIs.
```
import torch.nn as nn
from torch.nn.parameter import Parameter
from torch import _weight_norm, norm_except_dim
class Conv1dWithWeightNorm(nn.Conv1d):
def __init__(self, *args, **kwargs):
super(Conv1dWithWeightNorm, self).__init__(*args, **kwargs)
self.dim = 2
if is_deepspeed_zero3_enabled():
import deepspeed
with deepspeed.zero.GatheredParameters(self.weight):
weight = self.weight
else:
weight = self.weight
self.weight_g = Parameter(norm_except_dim(weight, 2, self.dim).data)
self.weight_v = Parameter(weight.data)
del self._parameters["weight"]
self.weight = _weight_norm(self.weight_v, self.weight_g, self.dim)
def compute_weight(self):
self.weight_g = Parameter(norm_except_dim(self.weight, 2, self.dim).data)
self.weight_v = Parameter(self.weight.data)
return _weight_norm(self.weight_v, self.weight_g, self.dim)
def forward(self, input):
self.weight = self.compute_weight()
return self._conv_forward(input, self.weight, self.bias)
```
and then using it:
```
class Wav2Vec2PositionalConvEmbedding(nn.Module):
def __init__(self, config):
super().__init__()
self.conv = Conv1dWithWeightNorm(
in_channels=config.hidden_size,
out_channels=config.hidden_size,
kernel_size=config.num_conv_pos_embeddings,
padding=config.num_conv_pos_embeddings // 2,
groups=config.num_conv_pos_embedding_groups,
)
self.padding = Wav2Vec2SamePadLayer(config.num_conv_pos_embeddings)
self.activation = ACT2FN[config.feat_extract_activation]
```
<|||||>I've just noticed that `Wav2Vec2Encoder` also uses `LayerDrop`
https://github.com/huggingface/transformers/blob/0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L555
So it'd break under multi-gpus - how do I activate this path so that it can be triggered and tested?
I did the synchronization for `Wav2Vec2EncoderStableLayerNorm` which was getting used by all current wav2vec2 example scripts I tried.
**edit:** resolved in https://github.com/huggingface/transformers/pull/11638
|
transformers | 11,637 | closed | [self-push CI] sync with self-scheduled | I forgot to add the missing `libaio-dev` to this workflow. This PR is fixing that.
Thank you!
@sgugger or @LysandreJik
| 05-07-2021 20:55:09 | 05-07-2021 20:55:09 | |
transformers | 11,636 | closed | [examples] fix sys.path in conftest.py | the pt/tf reshuffle broke the examples tests setup. This PR fixes it, by
* fixing `sys.path` setting now that it's one level down
@sgugger | 05-07-2021 19:59:45 | 05-07-2021 19:59:45 | That can't work since `conftest` now believes the module "tensorflow" is the folder in the examples with the same name, which is the reason why I moved it.
Path in conftest need to be adapted instead.<|||||>ah, OK, then I will make copies of it. But you can't move it since sys.path was relying on it being in a top-level subfolder - so will have to fix that or make the code more flexible. It's currently broken.
I will take care of it.
thank you for letting me know tf didn't like it.<|||||>ok,
1. put `conftest.py` back and fixed it to set up `sys.path` correctly
2. made 2 copies of it for legacy/research.<|||||>> The fix in itself LGTM but why add a conftest to legacy and research-projects? Both are not supposed to be tested.
I'm adding deepspeed tests for wav2vec2 - it's been complicated to make it work (some parts still don't work). So I want to make sure it doesn't break again.
I don't think it's a good idea to put these tests under common tests.
So I can't run the tests w/o this conftest on dev box.<|||||>I really don't think it's a good idea to add this to `research-projects`: all research projects have a pinned version of Transformers and this would use the master install to test them, which is incompatible with what is advertised.
Maybe add it specifically to the wav2vec2 research project? Or maybe wav2vec2 should move to a maintained example @patrickvonplaten ?<|||||>> I really don't think it's a good idea to add this to `research-projects`: all research projects have a pinned version of Transformers and this would use the master install to test them, which is incompatible with what is advertised.
Good point!
OK, will recall the 2 other files for now.
> Maybe add it specifically to the wav2vec2 research project? Or maybe wav2vec2 should move to a maintained example @patrickvonplaten ?
Yes, totally agree!<|||||>`pathlib` is easier to use for finding out older ancestors than doing `dirname` 4 times:
```
import sys
from pathlib import Path
git_repo_path = Path(__file__).resolve().parents[3] / "src"
sys.path.insert(1, str(git_repo_path))
``` |
transformers | 11,635 | closed | Add visual + link to Premium Support webpage to README | 
probably the most natural place to put it, @gary149 @LysandreJik | 05-07-2021 18:27:23 | 05-07-2021 18:27:23 | |
transformers | 11,634 | closed | Add missing git dependency for RAG example | As noticed in https://github.com/huggingface/transformers/issues/11609, one must install the `git` dependency.
I added it to the requirements.txt of the rag example | 05-07-2021 17:17:30 | 05-07-2021 17:17:30 | |
transformers | 11,633 | closed | Reduce to 1 worker and set timeout for GPU TF tests | This PR reduces the amount of workers for the TensorFlow tests and adds a timeout to 120 minutes to prevent crashed pytest workers from hanging indefinitely. | 05-07-2021 15:53:14 | 05-07-2021 15:53:14 | |
transformers | 11,632 | open | Felix | # 🌟 New model addition
## Model description
New model released by [google](https://github.com/google-research/google-research/tree/master/felix)
## Open source status
* [x] the model implementation is available: https://github.com/google-research/google-research/tree/master/felix
* [x] the model weights are available: https://github.com/google-research/google-research/tree/master/felix (pretrained `bert-base-cased` can be used)
* [x] who are the authors: @Jmallins
| 05-07-2021 13:25:07 | 05-07-2021 13:25:07 | Can I help with this, @patrickvonplaten ?<|||||>Hey @mrm8488,
It would be amazing if you're interested in adding this model. It looks like a difficult one, but I'd definitely try to help you as much as I can if you want to add it :-)<|||||>Looking forward ! <|||||>@ArthurZucker @younesbelkada (the ML engineers currently in charge of the text models). This has been collecting dust for a while. I still think it's relevant because it's the only encoder-only model that I'm aware of that can do text editing. Encoder-only models are faster than encoder-decoder and decoder-only models for inference.<|||||>This might be of interest for @fxmarty 👀 <|||||>(I mean, I suppose LaserTagger is also encoder-only, but FELIX came later and outperforms LaserTagger in both "%exact match with the reference sentence" and latency.)<|||||>Is anyone working on this? If not, I might try to take a shot at it...but first I'd ask if anyone has implemented already anything related to this model or this remained self-contained in the issues cathegory.<|||||>I think you can take this on! <|||||>Is there any new update on this? This approach for text editing with PointerNet reordering feels like a very important step for text editing and faster text generation inference. Is text editing, in general, something the HuggingFace team is interested in pursuing?<|||||>@afonso-sousa so far I didn't have time to work on this. I will eventually find the time, I hope this soon enough, but I would do it in my free time so I have no guarantees. But I'm not jealous: if anyone finds time to do it before me please do! |
transformers | 11,631 | closed | Update code example | # What does this PR do?
Fixes a small typo in the code example of `LukeForEntitySpanClassification`.
Fixes #11629
| 05-07-2021 12:05:03 | 05-07-2021 12:05:03 | |
transformers | 11,630 | closed | Simplify GPT-Neo local attention implementation | # What does this PR do?
The current local self attention implementation for gpt-neo follows the implementation in mtf, which makes it hard to understand. This pull request implements local self attention by changing the bias mask in global attention. Local self attention is a sliding window where each token can only attend to the previous window_size tokens. This implementation clearly reflects this.
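For intuition, here is a minimal standalone sketch of the sliding-window bias mask this boils down to (illustrative values and variable names, not the PR's exact code); the mask is then applied the same way as the usual causal bias in global attention:

```python
import torch

max_positions, window_size = 8, 3  # illustrative; the model reads these from its config

# full causal mask: token i may attend to every j <= i
causal = torch.tril(torch.ones((max_positions, max_positions), dtype=torch.uint8))
# drop everything further back than window_size positions
local = torch.bitwise_xor(causal, torch.tril(causal, -window_size))
# local[i, j] == 1 only inside a window of `window_size` positions ending at i,
# i.e. each token attends to itself and the tokens immediately before it
print(local)
```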
[Measured performance](https://gist.github.com/finetuneanon/5b2186c3555b652f387c86160cd89b55) (apply just 330686a3c0520c9727fe5ebed385e708d0178d77 and 269c497be1691556c830f61fa8f90001c692722f on #11320 patched to use) shows no noticable difference between implementations with respect to speed or VRAM usage. The results of both implementations are also identical.
Fixes #11320
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Since this PR mostly removes code, no additional tests or documentation were written.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
gpt-neo: @patil-suraj
| 05-07-2021 11:36:33 | 05-07-2021 11:36:33 | I just found an issue with use_cache. Will fix it later.
Edit: Fixed.<|||||>unstale<|||||>I have used this version of the code quite extensively without issues and have found no difference in performance when doing evaluations. I have also just fixed the so far present warning about the local attention layer biases not being initialized from checkpoints.<|||||>> I have used this version of the code quite extensively without issues and have found no difference in performance when doing evaluations. I have also just fixed the so far present warning about the local attention layer biases not being initialized from checkpoints.
I want to second this. For the past month, I have been [using](https://github.com/nostalgebraist/nostalgebraist-autoresponder/blob/ff2eafb1fbf6d670513f5d7b64ab99a739120945/stable_library_code/transformers/gpt_neo/modeling_gpt_neo.py) this version of the code in my bot -- which runs 24/7 and uses GPT-Neo 2.7B. It's worked perfectly for me.
By comparison, the implementation currently on `master` [requires](https://github.com/nostalgebraist/nostalgebraist-autoresponder/blob/ff2eafb1fbf6d670513f5d7b64ab99a739120945/selector_model/selector_nn_neo.py#L45) counter-intuitive, inefficient manual padding to avoid OOM.<|||||>Sorry about the super late response.
@finetuneanon @nostalgebraist
This sounds awesome! And yeah, I agree, the current implem is not super readable.
Did you guys run any memory/speed benchmarks comparing the two versions? If so could you please post the results and the script so that we could take a look?<|||||>Can provide those soon. Will close this PR though, because it's also included in #12106.<|||||>Performance evaluation results are available here:
https://github.com/huggingface/transformers/pull/12106#discussion_r650008259
I am reopening this pull request due to the splitting of that PR.<|||||>@patil-suraj is there a reason this hasn't been merged yet? I notice that this method is used in #12493<|||||>HI @finetuneanon, sorry to only come back to this now.
This is a great solution, so let's go ahead with this!
The PR is in good shape already but needs some clean-up and we would also need to adapt tests.
Let me know if you want to continue working on this.<|||||>Thanks for taking over. |
transformers | 11,629 | closed | LukeForEntitySpanClassification - ValueError: only one element tensors can be converted to Python scalars | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.10.25-linuxkit-x86_64-with-debian-10.1
- Python version: 3.7.4
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: none
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...):
- "studio-ousia/luke-large-finetuned-conll-2003"
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Example script:
https://huggingface.co/transformers/master/model_doc/luke.html#lukeforentityspanclassification
## To reproduce
Steps to reproduce the behavior:
1. Run [this](https://github.com/loretoparisi/hf-experiments/blob/master/src/luke/run.py) script or the code example below adapted from the documentation [here](https://huggingface.co/transformers/master/model_doc/luke.html#lukeforentityspanclassification)
2. Error:
```
Traceback (most recent call last):
File "src/luke/run.py", line 71, in <module>
predicted_class_idx = logits.argmax(-1).item()
ValueError: only one element tensors can be converted to Python scalars
```
```python
import os
from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification, LukeForEntitySpanClassification
ner_model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003",
cache_dir=os.getenv("cache_dir", "../../models"))
ner_tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003",
cache_dir=os.getenv("cache_dir", "../../models"))
text = "Beyoncé lives in Los Angeles"  # this line is missing from the snippet above; the string is inferred from the offsets below

# List all possible entity spans in the text
word_start_positions = [0, 8, 14, 17, 21] # character-based start positions of word tokens
word_end_positions = [7, 13, 16, 20, 28] # character-based end positions of word tokens
entity_spans = []
for i, start_pos in enumerate(word_start_positions):
for end_pos in word_end_positions[i:]:
entity_spans.append((start_pos, end_pos))
inputs = ner_tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = ner_model(**inputs)
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", ner_model.config.id2label[predicted_class_idx])
```
## Expected behavior
no errors, print predicted classes
| 05-07-2021 10:46:37 | 05-07-2021 10:46:37 | Yes, it should be updated, because `LukeForEntitySpanClassification` classifies each possible entity span independently, so it should instead become this:
```
predicted_class_indices = logits.argmax(-1).squeeze().tolist()
for span, predicted_class_idx in zip(entity_spans, predicted_class_indices):
if predicted_class_idx != 0:
print(text[span[0]:span[1]], model.config.id2label[predicted_class_idx])
```
The logits are of shape `(1,15,5)`, because there are 15 possible entity spans and 5 classes.
Thanks for reporting. Will fix this!
cc @ikuyamada <|||||>@NielsRogge cool, updated [my code](https://github.com/loretoparisi/hf-experiments/blob/master/src/luke/run.py#L71) and tested, now it works ok:
```
Beyoncé PER
Los Angeles LOC
```
|
transformers | 11,628 | closed | Fixes NoneType exception when topk is larger than one coupled with a small context in the Question-Answering pipeline | # What does this PR do?
This PR fixes a bug while using the QA pipeline where if `topk` > 1 (e.g., 20) and the context of a Question-Context pair is short, the pipeline will propose non-context words as candidates for the answer span. This will generate a `NoneType` error down the line.
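A rough reproduction sketch of the failure mode (the model name and the exact pre-fix behaviour are assumptions on my part, not taken from this PR):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
# A very short context combined with a large topk could surface candidate spans
# that fall outside the context, which then failed further down the pipeline.
print(qa(question="Where does she live?", context="Paris", topk=20))
```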
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/11354
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik @Narsil
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-07-2021 10:26:03 | 05-07-2021 10:26:03 | Great, thanks! Could you fix the quality issues in your code? You can do so with the following:
```
pip install -e .[quality]
make fixup
```<|||||>> Great, thanks! Could you fix the quality issues in your code? You can do so with the following:
>
> ```
> pip install -e .[quality]
> make fixup
> ```
yes ! thanks for the command. It is done :) |
transformers | 11,627 | closed | make fix copy | BigBird Pegasus was merged in parallel to a style refactor of Bart which led `make fix-copies` to fail | 05-07-2021 09:00:45 | 05-07-2021 09:00:45 | |
transformers | 11,626 | closed | NN_pruning module for Question Answering | Hi!
I am trying to run the launch_qa_sparse_single.py file from the question answering example from your nn_pruning library (https://github.com/huggingface/nn_pruning). I haven't changed anything from the original code and I get this error:
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
***** Running training *****
Num examples = 131754
Num Epochs = 20
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 1
Total optimization steps = 164700
0%| | 0/164700 [00:00<?, ?it/s]Traceback (most recent call last):
File "question_answering/launch_qa_sparse_single.py", line 33, in <module>
main()
File "question_answering/launch_qa_sparse_single.py", line 23, in main
qa.run()
File "./question_answering/xp.py", line 324, in run
self.train()
File "./question_answering/xp.py", line 312, in train
model_path= model_path
File "/home/ines/NN_pruning/venv_nn_prun/lib/python3.7/site-packages/transformers/trainer.py", line 1120, in train
tr_loss += self.training_step(model, inputs)
File "/home/ines/NN_pruning/nn_pruning/nn_pruning/sparse_trainer.py", line 86, in training_step
return super().training_step(*args, **kwargs)
File "/home/ines/NN_pruning/venv_nn_prun/lib/python3.7/site-packages/transformers/trainer.py", line 1542, in training_step
loss.backward()
File "/home/ines/NN_pruning/venv_nn_prun/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/ines/NN_pruning/venv_nn_prun/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [16]] is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I found several solutions to this problem on the internet, but all the solutions I came across tell me to change something in the architecture of the model. Unfortunately, here we are using a Trainer from the transformers library, so I don't really know how to fix this issue. Thank you for your help.
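(For anyone hitting the same message, here is a minimal standalone reproduction of this class of autograd error, unrelated to nn_pruning itself, just to show what it means: `torch.sigmoid` saves its output for the backward pass, so mutating that output in place invalidates the graph.)

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.sigmoid(x)   # sigmoid's backward pass needs its own output
y.add_(1)              # in-place edit of a tensor that autograd saved for backward
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)           # "... has been modified by an inplace operation ..."
```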
I am running this code with torch==1.8.1 and cuda=11.1. | 05-07-2021 08:46:56 | 05-07-2021 08:46:56 | Hi there, would you mind opening this issue in `nn_pruning` repo? Thanks.<|||||>This was bypassed in nn_pruning then fixed in transformers in https://github.com/huggingface/transformers/pull/12026 . |