repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 7,902 | closed | change TokenClassificationTask class methods to static methods | Since we do not require self in the class methods of TokenClassificationTask, we should probably switch to static methods. Also, since the class TokenClassificationTask does not contain a constructor, it is currently unusable as is. Switching to static methods avoids having to document the intent of the otherwise broken class.
Also, the get_labels and read_examples_from_file methods ought to be implemented by subclasses. Static method definitions are unchanged after inheritance, which means they can still be overridden, just like regular class methods.
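For illustration, the pattern being proposed looks roughly like this (a simplified sketch, not the actual `examples/token-classification` code):
```python
class TokenClassificationTask:
    @staticmethod
    def read_examples_from_file(data_dir, mode):
        raise NotImplementedError

    @staticmethod
    def get_labels(path):
        raise NotImplementedError


class NER(TokenClassificationTask):
    # a subclass can still override the parent's static methods
    @staticmethod
    def get_labels(path):
        with open(path) as f:
            return f.read().splitlines()
```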
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-19-2020 13:11:03 | 10-19-2020 13:11:03 | But why doesn't this PR trigger a CI test? @LysandreJik <|||||>I have no idea, this is the second time it happens. I sent an empty commit on your branch to trigger the build @donchev7.<|||||>Pinging @stefan-it for review. |
transformers | 7,901 | closed | GPT2Tokenizer strips spaces surrounding special tokens | ## Environment info
- `transformers` version: 3.3.1
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz, this seems to be an issue related to tokenization. So I hope you are the right person to ping here.
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Minimal working example:
```py
import os
os.makedirs('models/', exist_ok=True)
from transformers import GPT2Tokenizer
from tokenizers import ByteLevelBPETokenizer
open('train.txt', 'w').write('Training data including a <special> token.')
special_tokens = ['<special>']
bpe_tokenizer = ByteLevelBPETokenizer()
bpe_tokenizer.train(
files=['train.txt'],
special_tokens=special_tokens
)
bpe_tokenizer.save_model('models/')
gpt2_tokenizer = GPT2Tokenizer.from_pretrained(
'models/',
additional_special_tokens=special_tokens,
)
```
When encoding below text, the two tokenizers yield different outputs:
```py
>>> text = 'A <special> token.'
>>> bpe_tokenizer.encode(text).tokens
['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
>>> gpt2_tokenizer.tokenize(text)
['A', '<special>', 't', 'o', 'k', 'e', 'n', '.'] # <----- Note the missing space (`Ġ`) around `<special>`
```
## Expected behavior
I would expect that both tokenizers give the same output when encoding the sentence. Furthermore, because `GPT2Tokenizer` seems to remove the spaces surrounding the special token, the decode(encode()) does not return the original string.
```py
assert bpe_tokenizer.encode(text).tokens == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
assert gpt2_tokenizer.tokenize(text) == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
```
It is possible that I misunderstand the `GPT2Tokenizer` API. Please advise if I should pass `special_tokens` in a different way. Thank you in advance. | 10-19-2020 11:51:44 | 10-19-2020 11:51:44 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>The issue is still present.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue is still present in `transformers==4.5.1`.<|||||>Hey @jantrienes, sorry for getting back so late to you. The issue here is on our side - the added tokens work differently between the slow and fast tokenizers and we'll have to patch that. In the meantime, you can use `AddedToken`s instead of the strings to define the strategy w.r.t the stripping of whitespace. This should behave normally:
```py
import os
os.makedirs('models/', exist_ok=True)
from transformers import GPT2Tokenizer
from tokenizers import ByteLevelBPETokenizer, AddedToken
open('train.txt', 'w').write('Training data including a <special> token.')
special_tokens = [AddedToken('<special>')]
bpe_tokenizer = ByteLevelBPETokenizer()
bpe_tokenizer.train(
files=['train.txt'],
special_tokens=special_tokens
)
bpe_tokenizer.save_model('models/')
gpt2_tokenizer = GPT2Tokenizer.from_pretrained(
'models/',
)
gpt2_tokenizer.add_special_tokens({"additional_special_tokens": special_tokens})
text = 'A <special> token.'
print(bpe_tokenizer.encode(text).tokens)
print(gpt2_tokenizer.tokenize(text))
assert bpe_tokenizer.encode(text).tokens == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
assert gpt2_tokenizer.tokenize(text) == ['A', 'Ġ', '<special>', 'Ġ', 't', 'o', 'k', 'e', 'n', '.']
```
Note how the special tokens are defined using `AddedToken` instead of a string. Unfortunately, these cannot be passed during initialization as you've done, but I'm fixing this in https://github.com/huggingface/transformers/pull/11325.
You can control the `AddedToken`'s behavior relative to whitespace using the `rstrip` and `lstrip` keyword arguments.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue is present for "non-special" tokens in transformer 4.12.3. The code explicitly strips spaces from the surrounding tokens. It does NOT look for the `AddedToken` class to get the behavior so there is no way to change this.
See `tokenization_utils.py::tokenizer()` line 517:
```
else:
# We strip left and right by default
if right:
tokens[i + 1] = right.lstrip()
if left:
tokens[i - 1] = left.rstrip()
```
This leads to incorrect behavior unless the added token is in the middle of a longer word.
I should point out that the situation is more complicated than just changing the behavior above. Even if you comment out those lines (at least with the T5Tokenizer) there are still no spaces around the added tokens because, just below those lines, the logic that concatenates all the tokens also strips the spaces (see `tokenized_text.extend(self._tokenize(token))`).
I'm not sure what the right solution is but this is in the base class so it's happening for a number of tokenizers including T5 and bart. It is also happening for T5Fast.<|||||>@SaulLu, would you like to take a look at this?<|||||>Glad to look at this issue in more detail! I'll dive in tomorrow :monocle_face: <|||||>
> I'm not sure what the right solution is but this is in the base class so it's happening for a number of tokenizers including T5 and bart. It is also happening for T5Fast.
Yeah, I'm noticing spaces around my added tokens when creating a custom Bart tokenizer. Did this get resolved for you? Is there a way to work around it? |
transformers | 7,900 | closed | example for passage re-ranking using bert-multilingual-passage-reranking-msmarco | Can you give me a working example of this?
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
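For anyone looking for the same thing, here is a minimal sketch of how such a cross-encoder re-ranker is typically scored. The label order is an assumption (index 1 taken as "relevant"); the model card should be checked to confirm it:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")

query = "How many people live in Berlin?"
passages = [
    "Berlin has about 3.6 million registered inhabitants.",
    "The Eiffel Tower is located in Paris.",
]

# encode each (query, passage) pair and score it with the classification head
inputs = tokenizer([query] * len(passages), passages, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = torch.softmax(model(**inputs).logits, dim=-1)[:, 1]  # assumed "relevant" column

for passage, score in sorted(zip(passages, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```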
| 10-19-2020 10:13:01 | 10-19-2020 10:13:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,899 | closed | Create README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-19-2020 09:53:15 | 10-19-2020 09:53:15 | |
transformers | 7,898 | closed | [EncoderDecoder] google/roberta2roberta_L-24_wikisplit | It seems like `google/roberta2roberta_L-24_wikisplit` should be pre-processed differently than originally thought:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(long_sentence, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due Due hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open B
# ob's Burgers for customers who were planning on going to Lobsterfest.com.
```
yields a weird word duplication bug, whereas:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_wikisplit")
model = AutoModelForSeq2SeqLM.from_pretrained("google/roberta2roberta_L-24_wikisplit")
long_sentence = """Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it and he decides to open Bob 's Burgers for customers who were planning on going to Lobsterfest."""
input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
# should output
# Due to the hurricane, Lobsterfest has been canceled, making Bob very happy about it. He decides to open Bob's Burgers for customers who were planning on going to Lobsterfest.
```
yields good results.
Notice the difference between code samples 1 and 2: the second one builds the input as `input_ids = tokenizer(tokenizer.bos_token + long_sentence + tokenizer.eos_token, return_tensors="pt").input_ids`, i.e. it explicitly adds the BOS and EOS tokens.
@patrickvonplaten
Wait for author's (@shashiongithub) answer before changing code example.
| 10-19-2020 09:12:58 | 10-19-2020 09:12:58 | Confirmed by author that BOS and EOS have to be added -> fixed in https://github.com/huggingface/transformers/commit/0724c0f3a2d302246d0bd0b7d2f721fa902dee1b. |
transformers | 7,897 | closed | GPT2Tokenizer.add_tokens() didn't change tokenizer.vocab_size | Following the instructions in add_tokens():
```py
special_tokens = ['<pad>', '<go_r>', '<unk>', '<go_b>', '<go_a>', '<go_u>', ]
num_added_toks = tokenizer.add_tokens(special_tokens)
print('We have added', num_added_toks, 'tokens')
```
it can successfully encode and decode the newly added tokens, but the GPT2Tokenizer.vocab_size is still 50257. | 10-19-2020 07:59:58 | 10-19-2020 07:59:58 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
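For reference, `vocab_size` only reports the base vocabulary; tokens added via `add_tokens()` are counted by `len(tokenizer)` instead. A minimal sketch, assuming a standard `gpt2` checkpoint:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer.vocab_size)    # base vocabulary only, stays at 50257
num_added = tokenizer.add_tokens(['<pad>', '<go_r>', '<go_b>'])
print(len(tokenizer))          # base vocabulary + added tokens
# if a model is used with the new tokens, its embeddings must be resized to match:
# model.resize_token_embeddings(len(tokenizer))
```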
|
transformers | 7,896 | closed | Bert2bert EncoderDecoderModel from Huggingface is generating a zero tensor for any input | I am using Bert2Bert EncoderDecoderModel from Huggingface for sentence simplification. But my model is generating a zero tensor of the same length regardless of the input. Could someone help me figure out what is wrong in my project? The GitHub link of my project is https://github.com/Aakash12980/Sentence-Simplification-using-Transformer. Please help me through this.
My decode module is like this:
```py
@task.command()
@click.option('--src_test', default="./drive/My Drive/Mini Project/dataset/src_test.txt", help="test source file path")
@click.option('--tgt_test', default="./drive/My Drive/Mini Project/dataset/tgt_test.txt", help="test target file path")
@click.option('--best_model', default="./drive/My Drive/Mini Project/best_model/model.pt", help="best model file path")
def test(**kwargs):
print("Testing Model module executing...")
logging.info(f"Test module invoked.")
src_test = open_file(kwargs['src_test'])
tgt_test = open_file(kwargs['tgt_test'])
len_data = len(src_test)
test_dataset = WikiDataset(src_test, tgt_test)
test_dl = DataLoader(test_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)
_,_,_ = load_checkpt(kwargs["best_model"])
print("Model loaded...")
model.to(device)
model.eval()
test_start_time = time.time()
epoch_test_loss = evaluate(test_dl, 0)
epoch_test_loss = epoch_test_loss/len_data
print(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
logging.info(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
print("Test Complete!")
```
and the output it generates is this:
```
input: tensor([[ 101, 1188, 1108, 1272, 11310, 1125, 1136, 1151, 2548, 1106,
2732, 20386, 2692, 117, 2693, 1117, 16975, 117, 1105, 1103,
8183, 1571, 1105, 8183, 1545, 3784, 118, 8581, 1125, 3335,
1136, 1678, 2629, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
input: tensor([[ 101, 1109, 4665, 1108, 6497, 1173, 1112, 1126, 169, 169,
6548, 21718, 24226, 25285, 112, 112, 117, 1112, 1103, 1586,
1108, 169, 169, 16314, 1105, 4489, 1200, 1190, 1103, 6188,
117, 4411, 1126, 5340, 1895, 4252, 20473, 4626, 2116, 1104,
9494, 112, 112, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
input: tensor([[ 101, 13030, 1643, 2883, 1144, 3567, 8593, 9648, 1958, 117,
1259, 1210, 4748, 1453, 4278, 117, 1160, 1635, 8793, 112,
1635, 4278, 1105, 170, 6068, 1635, 1509, 119, 102]])
output: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```
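(Side note on the shape of those outputs: with no generation arguments, `generate()` falls back to the config's default `max_length` of 20, which is why every prediction is exactly 20 tokens long. A sketch of a more explicit call, reusing `model` and `tensor` from the decode module above:)
```python
predicted = model.generate(
    tensor.to(device),
    decoder_start_token_id=model.config.decoder.pad_token_id,  # as in the script above
    max_length=50,
    num_beams=4,
    early_stopping=True,
)
```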
| 10-19-2020 04:15:08 | 10-19-2020 04:15:08 | Hey @Aakash12980,
Could you please provide a complete code examples with which I can reproduce the results? Did you train the model yourself or did you use a pretrained model? (Which one in case you did?)<|||||>> Hey @Aakash12980,
>
> Could you please provide a complete code examples with which I can reproduce the results? Did you train the model yourself or did you use a pretrained model? (Which one in case you did?)
@patrickvonplaten I have used pretrained bert2bert model and I am fine-tuning it for sentence simplification.
The full code in run.py is below
```
logging.basicConfig(filename="./drive/My Drive/Mini Project/log_file.log", level=logging.INFO,
format="%(asctime)s:%(levelname)s: %(message)s")
CONTEXT_SETTINGS = dict(help_option_names = ['-h', '--help'])
TRAIN_BATCH_SIZE = 4
N_EPOCH = 5
LOG_EVERY = 11000
config_encoder = BertConfig()
config_decoder = BertConfig()
config_decoder.is_decoder = True
config_decoder.add_cross_attention = True
config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-cased', 'bert-base-cased', config=config)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} as device")
def collate_fn(batch):
data_list, label_list = [], []
for _data, _label in batch:
data_list.append(_data)
label_list.append(_label)
return data_list, label_list
def save_model_checkpt(state, is_best, check_pt_path, best_model_path):
f_path = check_pt_path
torch.save(state, f_path)
if is_best:
best_fpath = best_model_path
shutil.copyfile(f_path, best_fpath)
def load_checkpt(checkpt_path, optimizer=None):
if device == "cpu":
checkpoint = torch.load(checkpt_path, map_location=torch.device("cpu"))
model.load_state_dict(checkpoint["model_state_dict"])
if optimizer is not None:
optimizer.load_state_dict(checkpoint["optimizer_state_dict"], map_location=torch.device("cpu"))
else:
model.load_state_dict(checkpoint["model_state_dict"])
if optimizer is not None:
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
eval_loss = checkpoint["eval_loss"]
epoch = checkpoint["epoch"]
return optimizer, eval_loss, epoch
def evaluate(batch_iter, e_loss):
was_training = model.training
model.eval()
eval_loss = e_loss
with torch.no_grad():
for step, batch in enumerate(batch_iter):
src_tensors, src_attn_tensors, tgt_tensors, tgt_attn_tensors = generate_tokens(batch)
loss, _ = model(input_ids = src_tensors.to(device),
decoder_input_ids = tgt_tensors.to(device),
attention_mask = src_attn_tensors.to(device),
decoder_attention_mask = tgt_attn_tensors.to(device),
labels=tgt_tensors.to(device))[:2]
eval_loss += (1/(step+1)) * (loss.item() - eval_loss)
if was_training:
model.train()
return eval_loss
@click.group(context_settings=CONTEXT_SETTINGS)
@click.version_option(version = '1.0.0')
def task():
''' This is the documentation of the main file. This is the reference for executing this file.'''
pass
@task.command()
@click.option('--src_train', default="./drive/My Drive/Mini Project/dataset/src_train.txt", help="train source file path")
@click.option('--tgt_train', default="./drive/My Drive/Mini Project/dataset/tgt_train.txt", help="train target file path")
@click.option('--src_valid', default="./drive/My Drive/Mini Project/dataset/src_valid.txt", help="validation source file path")
@click.option('--tgt_valid', default="./drive/My Drive/Mini Project/dataset/tgt_valid.txt", help="validation target file path")
@click.option('--best_model', default="./drive/My Drive/Mini Project/best_model/model.pt", help="best model file path")
@click.option('--checkpoint_path', default="./drive/My Drive/Mini Project/checkpoint/model_ckpt.pt", help=" model check point files path")
@click.option('--seed', default=123, help="manual seed value (default=123)")
def train(**kwargs):
print("Training data module executing...")
logging.info(f"Train module invoked.")
seed = kwargs["seed"]
torch.manual_seed(seed)
if device == "cuda":
torch.cuda.manual_seed(seed)
np.random.seed(seed)
print("Loading dataset...")
src_train = open_file(kwargs['src_train'])
tgt_train = open_file(kwargs['tgt_train'])
src_valid = open_file(kwargs['src_valid'])
tgt_valid = open_file(kwargs['tgt_valid'])
train_len = len(src_train)
valid_len = len(src_valid)
print("Dataset Loaded.")
train_dataset = WikiDataset(src_train, tgt_train)
valid_dataset = WikiDataset(src_valid, tgt_valid)
del src_valid, src_train, tgt_train, tgt_valid
print("Creating Dataloader...")
train_dl = DataLoader(train_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)
valid_dl = DataLoader(valid_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)
print("Dataloader created.")
model.to(device)
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=3e-3)
eval_loss = float('inf')
start_epoch = 0
if os.path.exists(kwargs["checkpoint_path"]):
optimizer, eval_loss, start_epoch = load_checkpt(kwargs["checkpoint_path"], optimizer)
print(f"Loading model from checkpoint with start epoch: {start_epoch} and loss: {eval_loss}")
logging.info(f"Model loaded from saved checkpoint with start epoch: {start_epoch} and loss: {eval_loss}")
train_model(start_epoch, eval_loss, (train_dl, valid_dl), optimizer, kwargs["checkpoint_path"], kwargs["best_model"], (train_len, valid_len))
print("Model Training Complete!")
@task.command()
@click.option('--src_test', default="./drive/My Drive/Mini Project/dataset/src_test.txt", help="test source file path")
@click.option('--tgt_test', default="./drive/My Drive/Mini Project/dataset/tgt_test.txt", help="test target file path")
@click.option('--best_model', default="./drive/My Drive/Mini Project/best_model/model.pt", help="best model file path")
def test(**kwargs):
print("Testing Model module executing...")
logging.info(f"Test module invoked.")
src_test = open_file(kwargs['src_test'])
tgt_test = open_file(kwargs['tgt_test'])
len_data = len(src_test)
test_dataset = WikiDataset(src_test, tgt_test)
test_dl = DataLoader(test_dataset, batch_size=TRAIN_BATCH_SIZE, collate_fn=collate_fn, shuffle=True)
_,_,_ = load_checkpt(kwargs["best_model"])
print("Model loaded...")
model.to(device)
model.eval()
test_start_time = time.time()
epoch_test_loss = evaluate(test_dl, 0)
epoch_test_loss = epoch_test_loss/len_data
print(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
logging.info(f'avg. test loss: {epoch_test_loss:.5f} | time elapsed: {time.time() - test_start_time}')
print("Test Complete!")
#/drive/My Drive/Mini Project
@task.command()
@click.option('--src_file', default="./drive/My Drive/Mini Project/dataset/src_file.txt", help="test source file path")
@click.option('--best_model', default="./drive/My Drive/Mini Project/checkpoint/model_ckpt.pt", help="best model file path")
@click.option('--output', default="./drive/My Drive/Mini Project/outputs/decoded.txt", help="file path to save predictions")
def decode(**kwargs):
print("Decoding sentences module executing...")
logging.info(f"Decode module invoked.")
src_test = open_file(kwargs['src_file'])
print("Saved model loading...")
_,_,_ = load_checkpt(kwargs["best_model"])
print(f"Model loaded.")
model.to(device)
model.eval()
inp_tokens = create_sent_tokens(src_test)
predicted_list = []
print("Decoding Sentences...")
for tensor in inp_tokens:
with torch.no_grad():
predicted = model.generate(tensor.to(device), decoder_start_token_id=model.config.decoder.pad_token_id)
print(f"input: {tensor}")
print(f'output: {predicted.squeeze()}')
predicted_list.append(predicted.squeeze())
output = get_sent_from_tokens(predicted_list)
with open(kwargs["output"], "w") as f:
for sent in output:
f.write(sent + "\n")
print("Output file saved successfully.")
def train_model(start_epoch, eval_loss, loaders, optimizer, check_pt_path, best_model_path, len_data):
best_eval_loss = eval_loss
print("Model training started...")
for epoch in range(start_epoch, N_EPOCH):
print(f"Epoch {epoch} running...")
epoch_start_time = time.time()
epoch_train_loss = 0
epoch_eval_loss = 0
model.train()
for step, batch in enumerate(loaders[0]):
src_tensors, src_attn_tensors, tgt_tensors, tgt_attn_tensors = generate_tokens(batch)
optimizer.zero_grad()
model.zero_grad()
loss = model(input_ids = src_tensors.to(device),
decoder_input_ids = tgt_tensors.to(device),
attention_mask = src_attn_tensors.to(device),
decoder_attention_mask = tgt_attn_tensors.to(device),
labels = tgt_tensors.to(device))[0]
loss.backward()
optimizer.step()
epoch_train_loss += (1/(step+1))*(loss.item() - epoch_train_loss)
if (step+1) % LOG_EVERY == 0:
print(f'Epoch: {epoch} | iter: {step+1} | avg. loss: {epoch_train_loss/TRAIN_BATCH_SIZE} | time elapsed: {time.time() - epoch_start_time}')
logging.info(f'Epoch: {epoch} | iter: {step+1} | avg. loss: {epoch_train_loss/TRAIN_BATCH_SIZE} | time elapsed: {time.time() - epoch_start_time}')
eval_start_time = time.time()
epoch_eval_loss = evaluate(loaders[1], epoch_eval_loss)
epoch_eval_loss = epoch_eval_loss/TRAIN_BATCH_SIZE
print(f'Completed Epoch: {epoch} | avg. eval loss: {epoch_eval_loss:.5f} | time elapsed: {time.time() - eval_start_time}')
logging.info(f'Completed Epoch: {epoch} | avg. eval loss: {epoch_eval_loss:.5f} | time elapsed: {time.time() - eval_start_time}')
check_pt = {
'epoch': epoch+1,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'eval_loss': epoch_eval_loss
}
check_pt_time = time.time()
print("Saving Checkpoint.......")
if epoch_eval_loss < best_eval_loss:
print("New best model found")
logging.info(f"New best model found")
best_eval_loss = epoch_eval_loss
save_model_checkpt(check_pt, True, check_pt_path, best_model_path)
else:
save_model_checkpt(check_pt, False, check_pt_path, best_model_path)
print(f"Checkpoint saved successfully with time: {time.time() - check_pt_time}")
logging.info(f"Checkpoint saved successfully with time: {time.time() - check_pt_time}")
gc.collect()
torch.cuda.empty_cache()
if __name__ == "__main__":
task()
```
The GitHub link for my full project is here [https://github.com/Aakash12980/Sentence-Simplification-using-Transformer](url). <|||||>Hey @Aakash12980 - I think you forgot to replace the pad_token_ids with -100 in your code. You can checkout the map function here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script to see how this should be done.<|||||>> Hey @Aakash12980 - I think you forgot to replace the pad_token_ids with -100 in your code. You can checkout the map function here: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script to see how this should be done.
@patrickvonplaten I have included -100 as well in the labels and again I am getting the same weird outputs from my model.generate() method.
```
Setting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence
output: tensor([101, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119], device='cuda:0')
Setting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence
output: tensor([101, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119, 119,
119, 119], device='cuda:0')
```
My updated code is this link: [https://github.com/Aakash12980/Sentence-Simplification-using-Transformer](url)<|||||>Hey @Aakash12980, hmm - I'm not sure what to do here then either. I've done quite a lot of Bert2Bert fine-tune runs and they all worked for me. Also I will publish a more in-detail notebook in ~2 weeks on how to do Bert2Bert training.
It's quite time-consuming and difficult for me to dive deeper into your code, so the best I can do for you at the moment is this training code snippet: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#training-script<|||||>@patrickvonplaten Thank you so much for your time. <|||||>@patrickvonplaten Could you please explain to me why you have assigned -100 to PAD tokens?
```
# mask loss for padding
batch["labels"] = [
[-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"]
]
```
I don't quite understand what mask loss for padding means. Also, Do I need to pass labels when I am not using Trainer() method?
```
loss, logits = self.model(input_ids = src_tensors.to(device),
decoder_input_ids = tgt_tensors.to(device),
attention_mask = src_attn_tensors.to(device),
decoder_attention_mask = tgt_attn_tensors.to(device),
labels = labels.to(device))[:2]
```
Why do we pass labels since we have already provided decoder_input_ids?<|||||>if label == -100, it means that no loss is calculated on such tokens, so they do not affect the gradient. We don't want the model to learn to predict PAD tokens, so we make sure that no loss is calculated by putting a -100 on them. PyTorch's CE loss function uses -100 as the ignore token number.<|||||>Thank you @patrickvonplaten I have trained my model, but my model is generating tokens exactly similar to the input tokens. I tried to find out what I have done wrong but I couldn't. I don't know why it is generating the same tokens as the inputs. Do you have any idea?
```
Model loaded.
Decoding Sentences...
input tokens: tensor([[ 101, 140, 23339, 1706, 12788, 140, 22715, 8469, 1106, 2194,
1103, 5080, 1104, 172, 23339, 17458, 1107, 170, 172, 22715,
24759, 3536, 119, 102]])
Setting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence
output tokens: tensor([ 101, 140, 23339, 1706, 12788, 140, 22715, 8469, 1106, 2194,
1103, 5080, 1104, 172, 23339, 17458, 1107, 170, 172, 22715,
24759, 3536, 119, 102], device='cuda:0')
input tokens: tensor([[ 101, 18959, 18072, 1108, 1549, 1103, 13852, 6607, 1104, 156,
14046, 3263, 1670, 1118, 1624, 1600, 1107, 18563, 1571, 117,
1106, 9489, 1105, 2669, 1103, 2226, 112, 188, 16358, 20408,
119, 102]])
Setting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence
output tokens: tensor([ 101, 18959, 18072, 1108, 1549, 1103, 13852, 6607, 1104, 156,
14046, 3263, 1670, 1118, 1624, 1600, 1107, 18563, 1571, 117,
1106, 9489, 1105, 2669, 1103, 2226, 112, 188, 16358, 20408,
119, 102], device='cuda:0')
input tokens: tensor([[ 101, 153, 14272, 1633, 1110, 170, 5188, 1107, 1103, 1318,
26042, 2853, 1107, 1564, 118, 2466, 1699, 119, 102]])
Setting `pad_token_id` to 102 (first `eos_token_id`) to generate sequence
output tokens: tensor([ 101, 153, 14272, 1633, 1110, 170, 5188, 1107, 1103, 1318,
26042, 2853, 1107, 1564, 118, 2466, 1699, 119, 102],
device='cuda:0')
```
It is driving me crazy. I ran the training module for 4 epochs and this was the best model I got so far.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
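For reference, the `-100` masking discussed above boils down to something like this sketch (the tensors are dummy values; `ignore_index=-100` is PyTorch's cross-entropy default):
```python
import torch
from torch.nn import CrossEntropyLoss

pad_token_id = 0
labels = torch.tensor([[31, 42, 7, pad_token_id, pad_token_id]])
labels[labels == pad_token_id] = -100   # pad positions no longer contribute to the loss

logits = torch.randn(1, 5, 100)         # (batch, seq_len, vocab_size), dummy values
loss_fct = CrossEntropyLoss()           # ignore_index defaults to -100
loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```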
|
transformers | 7,895 | closed | [testing] slow tests should be marked as slow | As discussed in https://github.com/huggingface/transformers/issues/7885 this is an effort to make the test suite manageable execution time-wise, as the number of tests is growing and it takes much longer to complete the tests on CI.
* [x] document and set a standard for when a test needs to be marked as `@slow`
* [x] Mark the following tests as slow:
- tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained (23s)
- tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation (17s)
- tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline (35s)
Fixes: https://github.com/huggingface/transformers/issues/7885
@LysandreJik, @sgugger, @sshleifer
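For context, marking a test as slow is just a decorator (the class and test names below are illustrative, not the exact diff):
```python
import unittest

from transformers.testing_utils import slow


class TranslationPipelineTests(unittest.TestCase):
    @slow  # skipped unless RUN_SLOW=1 is set in the environment
    def test_torch_translation(self):
        ...
```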
| 10-19-2020 00:56:28 | 10-19-2020 00:56:28 | > For common tests to be marked slow, the slowest iteration of that common test must be > 15s.
This one probably needs to be more specific: if for a normal test we decide on a threshold of 5s, and a common test runs like 14s, 13s, 13s, 12s, that is still too slow. Perhaps an average over all instances of that common test should be calculated?
e.g. this one:
```
$ grep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'
230
$ grep test_model_outputs_equivalence stats.txt | wc -l
46
$ perl -le 'print 230/46'
5
```
How interesting that it hits 5sec exactly, but it's on my machine so need to re-eval on CI.
Here is a one liner to do all 3 lines at once: (`$.` contains the same data as `wc -l` - line counter in perl)
```
$ grep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x/$.}'
5
```
The full picture for this common test:
```
20.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence
16.19s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence
13.49s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence
9.94s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence
9.56s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence
8.81s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
8.29s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence
7.98s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence
7.87s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence
6.85s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_outputs_equivalence
6.81s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_model_outputs_equivalence
6.30s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_model_outputs_equivalence
6.25s call tests/test_modeling_roberta.py::RobertaModelTest::test_model_outputs_equivalence
5.90s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence
5.81s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence
5.79s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence
5.69s call tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence
5.35s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_model_outputs_equivalence
4.64s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence
4.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence
3.79s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence
3.71s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence
3.61s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence
3.58s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence
3.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_outputs_equivalence
3.53s call tests/test_modeling_ctrl.py::CTRLModelTest::test_model_outputs_equivalence
3.40s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence
3.31s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence
3.19s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence
3.12s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence
2.98s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence
2.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence
2.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence
2.59s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_outputs_equivalence
2.37s call tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_model_outputs_equivalence
2.17s call tests/test_modeling_funnel.py::FunnelModelTest::test_model_outputs_equivalence
2.13s call tests/test_modeling_fsmt.py::FSMTModelTest::test_model_outputs_equivalence
2.02s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_model_outputs_equivalence
1.94s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_model_outputs_equivalence
1.70s call tests/test_modeling_t5.py::T5ModelTest::test_model_outputs_equivalence
1.60s call tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence
1.44s call tests/test_modeling_lxmert.py::LxmertModelTest::test_model_outputs_equivalence
1.22s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_model_outputs_equivalence
1.11s call tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_model_outputs_equivalence
0.86s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_model_outputs_equivalence
0.33s call tests/test_modeling_blenderbot.py::BlenderbotTesterMixin::test_model_outputs_equivalence
```
```<|||||>Please have a look at this recent [report](https://github.com/huggingface/transformers/issues/7885#issuecomment-712287004) and let me know if anything else should be marked as slow.
<|||||>@LysandreJik, I think the 3 of us have had a good go at it. At your convenience please review the resulting prose and if there is something to modify please proceed to change it directly and then merge it. Much appreciated! |
transformers | 7,894 | closed | AdamW | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Using AdamW of transformers 2.5.1 or 2.6.0, I got the following:
/home/user/anaconda3/lib/python3.7/site-packages/transformers/optimization.py:155: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.)
exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
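The warning just means that PyTorch now wants `alpha` passed as a keyword rather than positionally; a standalone sketch of the two equivalent spellings (not the actual optimizer code):
```python
import torch

beta1 = 0.9
grad = torch.randn(3)
exp_avg = torch.zeros(3)

# deprecated positional form, as used in optimization.py at the time:
# exp_avg.mul_(beta1).add_(1.0 - beta1, grad)

# equivalent keyword form that avoids the warning:
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
```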
- `transformers` version:
- Platform: linux
- Python version: 3.7.4
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-19-2020 00:55:39 | 10-19-2020 00:55:39 | Hi, yes, this is a warning, nothing you should be afraid of. |
transformers | 7,893 | closed | pip install transformers by default install 2.5.1 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
https://huggingface.co/transformers/installation.html
Running `pip install transformers` by default installs 2.5.1.
Could you also specify whether the TensorFlow CPU or GPU version should be installed?
- `transformers` version:
- Platform:
- Python version: 3.7.4
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-19-2020 00:45:05 | 10-19-2020 00:45:05 | Running `pip install transformers` installs the latest version for me. Which pip version are you running on?
You can install both cpu or gpu for tensorflow, if you want to run on a cpu or a gpu.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,892 | closed | Issue with XLM-R for multiple-choice questions | Hi there,
I am not able to get reasonable numbers with XLM models ("xlm-roberta-base", "xlm-roberta-large") when I test them on [multiple choice questions](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice).
I suspect that it's related to issue #7774.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 3.6.0 (yes)
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik @sgugger @VictorSanh
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Here is the script you can try, based on [these instructions](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice):
```
export SWAG_DIR=/path/to/swag_data_dir
python ./examples/multiple-choice/run_multiple_choice.py \
--task_name swag \
--model_name_or_path xlm-roberta-base \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_gpu_eval_batch_size=16 \
--per_device_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
```
It should give you under 30% -- near random.
| 10-19-2020 00:01:37 | 10-19-2020 00:01:37 | Thanks for reporting, will investigate.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @danyaljj and @LysandreJik I have the same issue. Are there any suggestions for running xlm-r (base/large) with multiple-choice qa? |
transformers | 7,891 | closed | [RAG] Propagating of n_docs as parameter to all RagModel's related functions |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7874
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten | 10-18-2020 22:14:57 | 10-18-2020 22:14:57 | @lhoestq would be great if you can review as well<|||||>@patrickvonplaten Thanks for the review.
while working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -
```
batch_size = context_input_ids.shape[0] // n_docs
```
So still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.<|||||>> @patrickvonplaten Thanks for the review.
>
> while working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -
>
> ```
> batch_size = context_input_ids.shape[0] // n_docs
> ```
>
> So still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.
`context_input_ids` is always supposed to have a size of `n_docs` times the number of input questions <|||||>> > @patrickvonplaten Thanks for the review.
> > while working on this PR I found that in `RagTokenForGeneration` we are computing `batch_size` as follows -
> > ```
> > batch_size = context_input_ids.shape[0] // n_docs
> > ```
> >
> >
> > So still issue can come when `((context_input_ids.shape[0] % n_docs) != 0)`, but I can't think of solution to address this.
>
> `context_input_ids` is always supposed to have a size of `n_docs` times the number of input questions
It would be better if we mention it explicitly by assert. WDYT?
In one of my test case I used `n_docs=3` for retriever and `n_docs=2` for generator and it failed<|||||>> It would be better if we mention it explicitly by assert. WDYT?
> In one of my test case I used `n_docs=3` for retriever and `n_docs=2` for generator and it failed
Yes indeed. Also if `((context_input_ids.shape[0] % n_docs) != 0)` then we should raise an error otherwise some retrieved documents will be ignored for generation.<|||||>Yes @lalitpagaria - it would be nice if you can add an asserte statement verifying that `n_docs` is correctly set. `n_docs` should be the same for both retriever and generator.<|||||>@patrickvonplaten @lhoestq Added assert at two places please verify, along with supporting unit test. Pardon my naming convention for test function, and please suggest proper name :)
> n_docs should be the same for both retriever and generator.
This can't be check if `generator` does not know about `retriever` hence using this `((context_input_ids.shape[0] % n_docs) != 0)`<|||||>@patrickvonplaten and @lhoestq Thanks for the review. I liked the test coverage of this project. Initially I struggled but letter all worked nicely. You can merge when you want.<|||||>Slow tests pass => ready to merge<|||||>Good job @lalitpagaria ! |
transformers | 7,890 | closed | [wip] improved github actions workflow | This is a WIP work on https://github.com/huggingface/transformers/issues/7887 | 10-18-2020 20:17:17 | 10-18-2020 20:17:17 | @sshleifer, I don't think I can do anything here w/o repo permissions. I have no access to the runners.
https://github.com/stas00/transformers/actions/runs/319165633
> No runner matching the specified labels was found: self-hosted, multi-gpu<|||||>github is so borked! It now won't let me delete the new workflow - and keeps running it automatically and emailing me on each commit I make elsewhere in this project - what horrors!<|||||>ask on https://github.community/, they are pretty responsive. |
transformers | 7,889 | closed | from_pretrained incompatible with the models being downloaded | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Linux
- Python version: 3.7.7
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
-->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
```
import logging
logging.basicConfig(level=logging.DEBUG)
from transformers import TFAlbertModel
model = TFAlbertModel.from_pretrained('albert-xxlarge-v2')
```
**Problem:**
The model does not load the weights; I assume this is due to some incompatibility. The output:
```
INFO:transformers.modeling_tf_utils:Layers of TFAlbertModel not initialized from pretrained model: ['pooler', 'encoder', 'embeddings']
INFO:transformers.modeling_tf_utils:Layers from pretrained model not used in TFAlbertModel: ['albert', 'predictions']
```
I suppose the model from https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-tf_model.h5 has been changed and is incompatible now. I cannot find the older version of the model anywhere.
## To reproduce
Steps to reproduce the behavior:
(See script above)
1. (Assume) Clean model cache
2. Enable full logging
3. Run TFAlbertModel.from_pretrained('albert-xxlarge-v2')
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
It should load the weights correctly (or at least give an error/warning), not simply fail silently!
<!-- A clear and concise description of what you would expect to happen. -->
---
## Question
Is there a way to access older Albert models (for `transformers==2.3.0`)?
Those from https://s3.amazonaws.com/models.huggingface.co/bert/albert-xxlarge-v2-tf_model.h5 seem to be incompatible.
I really cannot modify the code or upgrade the `transformers` version right now, but it would totally save me if someone had the older version of the model available. Would it be possible to share it (at least all the ALBERT models)?
| 10-18-2020 20:04:58 | 10-18-2020 20:04:58 | I let @LysandreJik answer about the model but I have a tangential question I love to have some information about @sebastianbujwid (we may be able to help):
Why is it not possible for you to update transformers past the 2.3.0 version?<|||||>Hello! Indeed, there has been an issue with the TF ALBERT model that has been resolved in the following commit: https://github.com/huggingface/transformers/commit/1abd53b1aa2f15953bbbbbfefda885d1d9c9d94b#diff-143909fef69337378ffd5ccb1af28c3855ba498a11296fb62886b854d269342d
This commit is, unfortunately, only available from version v2.5.1 onwards. Is it possible for you to update your version to v2.5.1? If not, let me know specifically which models you would like and I'll give you URLs you can use to download these specific model weights, but please understand that *before v2.5.1*, the TF version of the ALBERT models will not work correctly if used with heads (e.g. SequenceClassification, TokenClassification, QuestionAnswering, etc.)<|||||>Thank you so much for the quick response!
I did more digging into the issue and realized that I actually still have the old model weights in my `.cache`; they are just not used, since the model checks the URL and downloads the new weights anyway (it expects the cached weights to have the newest e-tag).
Although I haven't verified it yet, I'm quite confident I should be able to either use the cached weights I have or maybe upgrade to v2.5.1 (assuming the API hasn't changed much; for practical reasons I'd strongly prefer not to change the code, since I would have to run additional tests and re-do at least some of my main experiments).
Otherwise, I might reach out again for the specific URLs. Thanks :)
Maybe the incompatibility is not fixable for older versions anymore, but for the future it might be good to do some versioning of the weight files and check for the right version when downloading/reading from the cache, in case that's not done yet? It's a shame if some code suddenly stops working without being changed.
Regarding the logging, after a quick check now I see that the recent code from the main branch uses `logging.warning` instead which should at least make the issue easier to notice (sadly, it took me a few days to realize that all my problems were due to the model not loading the pre-trained weights - apparently ALBERT with random weights still works much better than random on certain problems).<|||||>You're absolutely right that these incompatibility issues are due to not having a way to version models. You'll be happy to hear that we're currently working on this internally :).
Thank you for opening such a detailed issue! |
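As a side note for anyone stuck on an old release: one way to pin the exact weights (a sketch of a workaround, not an officially documented workflow; the directory path is a placeholder) is to load from a local copy, so a changed remote file can never be pulled in.
```python
from transformers import TFAlbertModel

# Directory previously populated with `model.save_pretrained(local_dir)`,
# i.e. containing config.json and tf_model.h5 (placeholder path).
local_dir = "/path/to/albert-xxlarge-v2-snapshot"
model = TFAlbertModel.from_pretrained(local_dir)
```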
transformers | 7,888 | closed | [tests] fix slow bart cnn test, faster marian tests | + bart slow cnn test needs to not consider special tokens
+ Marian integration tests run `setupClass`, which downloads a tokenizer, even if they are skipped. By moving that logic to skipped property, we can save 40s of useless tokenizer downloads.
cc @stas00
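One way to realize the lazy setup described above (a sketch with placeholder class and checkpoint names, not the actual diff): move the download out of `setUpClass` and into a cached property, so skipped tests never trigger it.
```python
import unittest
from functools import cached_property  # Python 3.8+

from transformers import AutoTokenizer


class MarianIntegrationTestSketch(unittest.TestCase):
    model_name = "Helsinki-NLP/opus-mt-en-de"  # placeholder checkpoint

    @cached_property
    def tokenizer(self):
        # Only evaluated when a test body actually touches self.tokenizer,
        # so skipped tests never pay for the download.
        return AutoTokenizer.from_pretrained(self.model_name)
```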
| 10-18-2020 19:48:05 | 10-18-2020 19:48:05 | Merging as this just fixes tests. |
transformers | 7,887 | closed | Github actions: more readable/informative output | ### TLDR
The goal of this Issue is to make the output of https://github.com/huggingface/transformers/runs/1269744369?check_suite_focus=true more readable, without changing the fact that the job failed.
#### Current Workflow:
I randomly remember to check github actions every 3 days, scroll a long way to see what broke, figure out who to assign (sometimes easy/sometimes hard), and then make issues or slack that person. The goal of this PR is to try to make that process easier by attacking the first two steps, but ideas for automating parts of the second two steps are welcome.
#### Goals:
- user can click on github actions and see what broke quickly.
- (Optional) user can see 50 slowest tests
- (HARD/Optional) user can subscribe to slack message/email that pings them when action breaks (Related: https://github.community/t/subscribing-to-actions-notifications/135328)
#### How to attack this PR
I would make a new github action file that runs only on your branch and makes the output/artifacts files you desire. Then, once it is working, apply similar logic to `.github/workflows/self-scheduled.yml`. Once PR is merged, send follow up PR applying same logic to `.github/workflows/self-push.yml`
| 10-18-2020 19:37:51 | 10-18-2020 19:37:51 | I will surely give it a try, @sshleifer <|||||>So the idea of running my own gh action didn't quite work https://github.com/huggingface/transformers/pull/7890 as I don't have the right permissions to do so.
But that doesn't mean this issue can't be worked on.
# Making information easier/quicker to find
Basically we have the logs of the tests run and nothing else. Currently finding something in those logs is not trivial, so this part can definitely be improved. We could `tee` the logs into a file and then do a postprocessing on those (will need to figure out how to run a subsequent task despite the test suite task failure, but I'm sure in the worst case some bash magic could work around it). So we can make various clickable tasks that will show just specific outputs. I can also check whether there are some pytest extensions that might help here. Or we might write our own extension that will do just that - log different interesting things into different files.
You can tell me which info chunks you want and I can work on generating those from the log file. Currently you mentioned:
- show slow tests (already in the log file, just need to make them a separate one-click accessible item, I think pytest has an option to log that into a separate report)
- show just the errors
anything else?
This part would be just as useful to normal CIs.
# Communicating with specific user
a. the first problem is which user? a scheduled job may have dozens of merges in it - how do you find out which of them made the commit that caused a failure?
Unless you make some kind of contact-table and look up the contact based on which test file failed. I was under the impression you wanted to contact not the specific maintainer, but the original PR creator.
b. any push communications require some kind of auth, which would be very tricky with public repo. it could probably use some webhooks to ping some external server that could do something with that info. e.g. it could have the right auth and post on failure to a slack channel which devs can watch - easier than watching git hub actions (pull model).
<|||||>@sgugger suggested that CircleCI has an Artifacts tab, where we could add various bit of info.<|||||>I want exactly the artifacts file of the circleci job (just uses `|tee output.txt`) available in github actions, but it sounds like that is hard.
Re, communicating with specific user > can be just lysandre for now. I already added the SLACK_WEBHOOK_URL to secrets, but this should definitely not be your task given privs.
<|||||>To automate the finding of the specific user we will need to `git bisect` for the range of commits since the last check with a rerun of just the failing tests , which will give us the first hash, and thus the PR that caused it.
This efficient approach may not work 100% of the time, if the failing test happens due to some other test (test dependency) affecting it, so if this speedy reduction doesn't work, then the full test suite will need to be re-run in conjunction with `git bisect`.<|||||>WIP https://github.com/huggingface/transformers/pull/7995 |
transformers | 7,886 | closed | Question answering example errors with BrokenPipeError: [Errno 32] Broken pipe | ## Environment info
- `transformers` version: 3.3.1
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.1
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: No
### Who can help
No option for question answering in your list
## Information
question-answering pipeline example was used.
The problem arises when using:
the official example scripts
The tasks I am working on is:
question-answering
## To reproduce
Steps to reproduce the behavior:
1. Run
```
from transformers import pipeline
# From https://huggingface.co/transformers/usage.html
nlp = pipeline("question-answering")
context = r"""
Extractive Question Answering is the task of extracting an answer from a text given a question. An example of a
question answering dataset is the SQuAD dataset, which is entirely based on that task. If you would like to fine-tune
a model on a SQuAD task, you may leverage the `run_squad.py`.
"""
print(nlp(question="What is extractive question answering?", context=context))
print(nlp(question="What is a good example of a question answering dataset?", context=context))
```
2. Get error:
```
I1018 21:00:26.369285 15020 filelock.py:274] Lock 2310999501064 acquired on C:\Users\User/.cache\torch\transformers\c2341a51039a311cb3c7dc71b3d21970e6a127876f067f379f8bcd77ef870389.6a09face0659d64f93c9919f323e2ad4543ca9af5d2417b1bfb1a36f2f6b94a4.lock
Downloading: 100%|███████████████████████████████████████████████████████████████████████| 473/473 [00:00<00:00, 118kB/s]
I1018 21:00:27.404981 15020 filelock.py:318] Lock 2310999501064 released on C:\Users\User/.cache\torch\transformers\c2341a51039a311cb3c7dc71b3d21970e6a127876f067f379f8bcd77ef870389.6a09face0659d64f93c9919f323e2ad4543ca9af5d2417b1bfb1a36f2f6b94a4.lock
I1018 21:00:28.506618 15020 filelock.py:274] Lock 2310999482320 acquired on C:\Users\User/.cache\torch\transformers\cee054f6aafe5e2cf816d2228704e326446785f940f5451a5b26033516a4ac3d.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1.lock
Downloading: 100%|█████████████████████████████████████████████████████████████████████| 213k/213k [00:00<00:00, 299kB/s]
I1018 21:00:30.234217 15020 filelock.py:318] Lock 2310999482320 released on C:\Users\User/.cache\torch\transformers\cee054f6aafe5e2cf816d2228704e326446785f940f5451a5b26033516a4ac3d.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1.lock
I1018 21:00:31.294145 15020 filelock.py:274] Lock 2310999651048 acquired on C:\Users\User/.cache\torch\transformers\414816afc2ab8922d082f893dbf90bcb9a43f09838039249c6c8ca3e8b77921f.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331.lock
Downloading: 100%|██████████████████████████████████████████████████████████████████████| 230/230 [00:00<00:00, 51.5kB/s]
I1018 21:00:32.335124 15020 filelock.py:318] Lock 2310999651048 released on C:\Users\User/.cache\torch\transformers\414816afc2ab8922d082f893dbf90bcb9a43f09838039249c6c8ca3e8b77921f.455d944f3d1572ab55ed579849f751cf37f303e3388980a42d94f7cd57a4e331.lock
I1018 21:00:33.984491 15020 filelock.py:274] Lock 2310999583152 acquired on C:\Users\User/.cache\torch\transformers\3efcb155a9475fe6b9318b8a8d5278bce1972d30291f97f2a8faeb50d02acabc.087b9fac49619019e540876a2d8ecb497884246b5aa8c9e8b7a0292cfbbe7c52.lock
Downloading: 100%|████████████████████████████████████████████████████████████████████| 261M/261M [00:42<00:00, 6.14MB/s]
I1018 21:01:16.665997 15020 filelock.py:318] Lock 2310999583152 released on C:\Users\User/.cache\torch\transformers\3efcb155a9475fe6b9318b8a8d5278bce1972d30291f97f2a8faeb50d02acabc.087b9fac49619019e540876a2d8ecb497884246b5aa8c9e8b7a0292cfbbe7c52.lock
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 105, in spawn_main
Traceback (most recent call last):
File "e:\Work\Python\transformers_question_answering.py", line 11, in <module>
exitcode = _main(fd)print(nlp(question="What is extractive question answering?", context=context))
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 114, in _main
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in __call__
prepare(preparation_data)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 225, in prepare
for example in examples
_fixup_main_from_path(data['init_main_from_path']) File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in <listcomp>
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 263, in run_path
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\data\processors\squad.py", line 345, in squad_convert_examples_to_features
pkg_name=pkg_name, script_name=fname)
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 96, in _run_module_code
with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 119, in Pool
mod_name, mod_spec, pkg_name, script_name)
File "E:\WPy-3710\python-3.7.1.amd64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "e:\Work\Python\transformers_question_answering.py", line 11, in <module>
print(nlp(question="What is extractive question answering?", context=context))
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in __call__
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\pipelines.py", line 1691, in <listcomp>
for example in examples
File "E:\WPy-3710\python-3.7.1.amd64\lib\site-packages\transformers\data\processors\squad.py", line 345, in squad_convert_examples_to_features
with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p:
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())context=self.get_context())
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 177, in __init__
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 177, in __init__
self._repopulate_pool()self._repopulate_pool()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 238, in _repopulate_pool
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 238, in _repopulate_pool
self._wrap_exception)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 257, in _repopulate_pool_static
self._wrap_exception)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\pool.py", line 257, in _repopulate_pool_static
w.start()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\process.py", line 112, in start
w.start()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)self._popen = self._Popen(self)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 322, in _Popen
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
return Popen(process_obj)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)reduction.dump(process_obj, to_child)
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\reduction.py", line 60, in dump
_check_not_importing_main()
File "E:\WPy-3710\python-3.7.1.amd64\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
```
## Expected behavior
Output:
```
{'score': 0.622232091629833, 'start': 34, 'end': 96, 'answer': 'the task of extracting an answer from a text given a question.'}
{'score': 0.5115299158662765, 'start': 147, 'end': 161, 'answer': 'SQuAD dataset,'}
```
| 10-18-2020 19:25:39 | 10-18-2020 19:25:39 | Same problem<|||||>This was patched on `master`, cannot reproduce anymore. Can you reproduce on `master`?<|||||>> This was patched on `master`, cannot reproduce anymore. Can you reproduce on `master`?
I installed the newest version (3.4.0) by pip, but this problem still arises.<|||||>Facing the same issue with 3.5.1<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Fixed on version 4.0.1<|||||>transformers 4.0.1?? |
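For anyone hitting this on Windows: the traceback itself points at the missing `if __name__ == "__main__":` guard. A guarded version of the reproduction script (a sketch of the user-side workaround, not a library fix) looks like this:
```python
from transformers import pipeline


def main():
    nlp = pipeline("question-answering")
    context = "Extractive Question Answering is the task of extracting an answer from a text given a question."
    print(nlp(question="What is extractive question answering?", context=context))


if __name__ == "__main__":
    # Required on Windows: worker processes are spawned, so the module must be
    # importable without re-running the pipeline call at import time.
    main()
```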
transformers | 7,885 | closed | [testing] the test suite is many times slower than 2 weeks ago | We are going to have a CI-side running reports when this is merged https://github.com/huggingface/transformers/pull/7884, but we can already start looking at what caused a 4-5 times slowdown in the test suite about 10 days ago. I'm not sure the exact moment, but I checked a few reports and it appears that the change happened around Oct 8th +/- a few days.
e.g. before:
https://app.circleci.com/pipelines/github/huggingface/transformers/13323/workflows/5984ea0e-e280-4a41-bc4a-b4a3d72fc411/jobs/95699
after:
https://app.circleci.com/pipelines/github/huggingface/transformers/13521/workflows/d235c864-66fa-4408-a787-2efab850a781/jobs/97329
@sshleifer suggested a diagnostic to resolve this: adding a pytest `--durations=N` flag. The catch is that if the cause is a missing `@slow` it won't show up on my machine, because I already have all the models pre-downloaded, so the following is just the slow execution:
Here is the report on my machine running all tests normally
```
$ pytest -n 3 --durations=0 tests
[...]
76.92s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_train_pipeline_custom_model
54.38s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_graph_mode
49.85s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_compile_tf_model
48.98s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_save_pretrained
44.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_compile_tf_model
38.42s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_graph_mode
35.94s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_tokenization_python_rust_equals
35.86s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_create_token_type_ids
35.81s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_embeded_special_tokens
35.58s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_max_length_equal
35.54s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_padding
35.36s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_is_fast
35.14s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_pretokenized_inputs
35.10s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_special_tokens_map_equal
35.07s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_num_special_tokens_to_add_equal
35.02s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_build_inputs_with_special_tokens
34.94s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_prepare_for_model
31.60s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_compile_tf_model
31.03s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_train_pipeline_custom_model
29.11s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model
29.10s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_train_pipeline_custom_model
27.62s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_pt_tf_model_equivalence
26.36s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_compile_tf_model
25.12s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_save_pretrained
24.85s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_graph_mode
24.66s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_compile_tf_model
24.04s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_tokenization_python_rust_equals
23.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained
23.10s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_graph_mode
23.08s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_padding
22.99s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_compile_tf_model
22.78s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_train_pipeline_custom_model
22.69s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keras_save_load
22.67s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_train_pipeline_custom_model
22.43s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_pretokenized_inputs
22.38s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_create_token_type_ids
22.35s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_prepare_for_model
22.28s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
22.25s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_max_length_equal
22.19s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_embeded_special_tokens
22.06s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_special_tokens_map_equal
21.95s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_is_fast
21.92s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_build_inputs_with_special_tokens
21.85s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_num_special_tokens_to_add_equal
21.61s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode
21.49s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens
21.32s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_tokens
21.21s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_alignement_methods
21.09s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_batch_encode_dynamic_overflowing
21.06s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_fast_only_inputs
20.95s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
20.86s call tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_offsets_mapping
20.06s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence
20.01s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_train_pipeline_custom_model
19.62s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_attention_outputs
19.39s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
18.78s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_save_pretrained
18.63s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_train_pipeline_custom_model
18.36s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_compile_tf_model
18.08s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_tokenization_python_rust_equals
17.85s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_save_load
17.54s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_train_pipeline_custom_model
17.39s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_train_pipeline_custom_model
17.28s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_embeded_special_tokens
17.25s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_special_tokens_map_equal
16.88s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_train_pipeline_custom_model
16.84s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_graph_mode
16.74s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript
16.73s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_padding
16.63s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_max_length_equal
16.56s call tests/test_modeling_fsmt.py::FSMTModelTest::test_lm_head_model_random_beam_search_generate
16.55s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
16.53s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_build_inputs_with_special_tokens
16.53s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_is_fast
16.49s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_create_token_type_ids
16.45s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_num_special_tokens_to_add_equal
16.43s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode
16.42s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_prepare_for_model
16.09s call tests/test_tokenization_albert.py::AlbertTokenizationTest::test_pretokenized_inputs
16.02s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence
15.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_train_pipeline_custom_model
15.50s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation
15.30s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_train_pipeline_custom_model
15.00s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_graph_mode
14.96s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_compile_tf_model
14.07s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_train_pipeline_custom_model
14.03s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_pt_tf_model_equivalence
13.77s call tests/test_modeling_rag.py::RagDPRT5Test::test_model_generate
13.29s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_hidden_states_output
13.10s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence
12.69s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence
12.43s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_save_pretrained
11.76s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_compile_tf_model
11.73s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_graph_mode
11.66s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_pt_tf_model_equivalence
11.63s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_special_tokens_map_equal
11.60s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_batch_encode_dynamic_overflowing
11.51s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_add_special_tokens
11.50s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_compile_tf_model
11.36s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_prepare_for_model
11.34s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_add_tokens
11.23s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_tokenization_python_rust_equals
11.19s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_fast_only_inputs
11.17s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_offsets_mapping
11.09s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_no_configs_xla
11.05s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_alignement_methods
11.04s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_is_fast
10.95s call tests/test_tokenization_fast.py::WordPieceFastTokenizerTest::test_offsets_with_special_characters
10.81s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_embeded_special_tokens
10.71s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_max_length_equal
10.59s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_build_inputs_with_special_tokens
10.59s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_num_special_tokens_to_add_equal
10.56s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_pt_tf_model_equivalence
10.42s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state
10.39s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_create_token_type_ids
10.36s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_compile_tf_model
10.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_attention_outputs
10.31s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence
10.25s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
10.15s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions
9.78s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_pt_tf_model_equivalence
9.76s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
9.76s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence
9.72s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
9.50s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
9.37s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_pt_tf_model_equivalence
9.31s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_batch_encode_dynamic_overflowing
9.31s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence
9.30s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_save_load
9.10s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence
9.01s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_keras_save_load
8.88s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_add_tokens
8.81s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_offsets_mapping
8.80s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_keras_save_load
8.73s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_add_special_tokens
8.66s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_fast_only_inputs
8.60s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_pt_tf_model_equivalence
8.59s call tests/test_tokenization_fast.py::RobertaFastTokenizerTest::test_alignement_methods
8.57s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence
8.50s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_compile_tf_model
8.34s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_summarization
8.33s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_lm_head_model_random_beam_search_generate
8.17s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript
8.01s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence
7.96s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence
7.88s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_attention_outputs
7.87s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_graph_mode
7.84s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_attention_outputs
7.84s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
7.58s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions
7.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_train_pipeline_custom_model
7.51s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_save_load
7.47s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_attention_outputs
7.46s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_keyword_and_dict_args
7.36s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_save_load
7.36s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_determinism
7.32s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_graph_mode
7.23s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_attention_outputs
7.16s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_text_generation
7.05s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions
7.04s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_resize_token_embeddings
7.02s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_lm_head_model_random_beam_search_generate
6.90s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_train_pipeline_custom_model
6.88s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_hidden_states_output
6.82s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_save_load
6.73s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_lm_head_model_random_beam_search_generate
6.70s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
6.60s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_save_load
6.57s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_keras_save_load
6.48s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_graph_mode
6.47s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_attention_outputs
6.44s call tests/test_modeling_encoder_decoder.py::GPT2EncoderDecoderModelTest::test_encoder_decoder_model_generate
6.44s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_inputs_embeds
6.35s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_compile_tf_model
6.25s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model
6.25s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_lm_head_model_random_beam_search_generate
6.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_loss_computation
6.05s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions
5.98s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_save_load
5.93s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_train_pipeline_custom_model
5.77s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model
5.72s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
5.69s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state
5.65s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_fast_only_inputs
5.64s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_multigpu_data_parallel_forward
5.60s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input
5.58s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence
5.57s call tests/test_modeling_rag.py::RagDPRBartTest::test_model_with_encoder_outputs
5.56s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
5.54s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_attention_outputs
5.54s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_padding
5.51s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions
5.46s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_alignement_methods
5.46s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_offsets_mapping
5.44s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_add_special_tokens
5.40s call tests/test_tokenization_fast.py::NoPaddingTokenFastTokenizerMatchingTest::test_add_tokens
5.33s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_keras_save_load
5.30s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
5.27s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode
5.22s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_hidden_states_output
5.20s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_graph_mode
5.12s call tests/test_pipelines.py::NerPipelineTests::test_tf_only_ner
5.11s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
5.10s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
5.09s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state
5.07s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_graph_mode
5.05s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input
5.04s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_graph_mode
5.01s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_hidden_state
4.99s call tests/test_modeling_fsmt.py::FSMTHeadTests::test_generate_fp16
4.94s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence
4.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_hidden_states_output
4.90s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state
4.85s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript
4.84s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_train_pipeline_custom_model
4.82s call tests/test_pipelines.py::QAPipelineTests::test_tf_question_answering
4.81s call tests/test_modeling_encoder_decoder.py::BertEncoderDecoderModelTest::test_save_and_load_from_encoder_decoder_pretrained
4.78s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_hidden_state
4.76s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input
4.73s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs
4.72s call tests/test_pipelines.py::QAPipelineTests::test_torch_question_answering
4.68s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_pt_tf_model_equivalence
4.63s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens
4.60s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_save_load
4.59s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence
4.59s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent
4.57s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs
4.56s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_hidden_states_output
4.51s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_text2text
4.48s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_keras_save_load
4.48s call tests/test_tokenization_marian.py::MarianTokenizationTest::test_tokenizer_equivalence_en_de
4.47s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_keras_save_load
4.44s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_attention_outputs
4.43s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_hidden_states_output
4.41s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_keyword_and_dict_args
4.39s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline
4.34s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load
4.33s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_hidden_states_output
4.28s call tests/test_tokenization_fsmt.py::FSMTTokenizationTest::test_pickle_tokenizer
4.28s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_text_generation
4.27s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
4.16s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_resize_token_embeddings
4.12s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model
4.10s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_keras_save_load
4.09s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_train_pipeline_custom_model
4.09s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_save_load
4.08s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_head_pruning_integration
4.07s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_keras_save_load
4.06s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_attentions
4.04s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask
3.96s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_determinism
3.96s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_pt_tf_model_equivalence
3.95s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_keras_save_load
3.95s setup tests/test_modeling_marian.py::TestMarian_FR_EN::test_batch_generation_fr_en
3.93s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence
3.88s call tests/test_pipelines.py::NerPipelineTests::test_ner_grouped
3.87s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_determinism
3.86s call tests/test_pipelines.py::NerPipelineTests::test_torch_ner
3.86s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_feature_extraction
3.85s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_sentiment_analysis
3.82s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
3.75s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_loss_computation
3.75s call tests/test_modeling_t5.py::T5ModelTest::test_export_to_onnx
3.73s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_lm_head_model_random_beam_search_generate
3.72s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_keras_save_load
3.70s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_with_targets
3.69s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_hidden_states_output
3.64s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
3.62s call tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_fill_mask_with_targets
3.60s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_feature_extraction
3.60s call tests/test_pipelines.py::NerPipelineTests::test_tf_ner
3.60s call tests/test_pipelines.py::ZeroShotClassificationPipelineTests::test_torch_zero_shot_classification
3.59s call tests/test_modeling_bert.py::BertModelTest::test_torchscript
3.58s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_sentiment_analysis
3.57s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_auto_config
3.53s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_resize_token_embeddings
3.50s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask
3.50s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_keras_save_load
3.50s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript
3.46s call tests/test_modeling_bart.py::BARTModelTest::test_tiny_model
3.46s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_inputs_embeds
3.44s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state
3.42s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence
3.42s call tests/test_pipelines.py::ZeroShotClassificationPipelineTests::test_tf_zero_shot_classification
3.42s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_pt_tf_model_equivalence
3.40s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_batch_generation_en_ROMANCE_multi
3.40s call tests/test_tokenization_albert.py::AlbertTokenizationTest::test_maximum_encoding_length_pair_input
3.40s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence
3.39s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keyword_and_dict_args
3.36s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence
3.35s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_batch_generation_en_de
3.33s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence
3.32s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_determinism
3.32s call tests/test_pipelines.py::NerPipelineTests::test_tf_ner_grouped
3.31s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_tokenizer_handles_empty
3.30s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_resize_token_embeddings
3.29s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_pt_tf_model_equivalence
3.28s call tests/test_modeling_bert.py::BertModelTest::test_head_pruning_integration
3.28s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence
3.27s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions
3.26s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence
3.23s setup tests/test_modeling_marian.py::TestMarian_en_zh::test_batch_generation_eng_zho
3.23s setup tests/test_modeling_marian.py::TestMarian_EN_FR::test_batch_generation_en_fr
3.22s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
3.20s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_attention_outputs
3.20s setup tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr
3.19s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_lm_head_model_random_no_beam_search_generate
3.18s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_keras_save_load
3.17s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_pt_tf_model_equivalence
3.15s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_graph_mode
3.13s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_pt_tf_model_equivalence
3.13s setup tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en
3.12s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_compile_tf_model
3.12s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward
3.11s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence
3.10s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_keyword_and_dict_args
```
I made a 3-sec cut-off for this listing.
@sshleifer, @sgugger, @LysandreJik, @thomwolf | 10-18-2020 19:02:54 | 10-18-2020 19:02:54 | Please note that this is runtime on my machine and not CIs - so make sure you're evaluating the report relative to itself and not CI. Once the PR is merged we will start getting CI reports.
Total run time 4248s.
So it looks like on the torch side `test_tokenization_fast.py` is the main culprit, adding up to 1300 secs, roughly a third of the whole test run.
And the bulk of the slowdown is in the TF tests.
| time (s) | tests |
|-------|---------------------------------|
| 1300 | test_tokenization_fast.py |
| 1089 | the rest of the torch tests |
| 1859 | TF tests |
| 4248 | **Total** |
Another thing I noticed `tests/test_modeling_marian.py` spends almost 40 secs in setup (11 x 3.3secs) - that's very slow:
```
3.95s setup tests/test_modeling_marian.py::TestMarian_FR_EN::test_batch_generation_fr_en
3.57s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_auto_config
3.40s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_batch_generation_en_ROMANCE_multi
3.35s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_batch_generation_en_de
3.31s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_tokenizer_handles_empty
3.23s setup tests/test_modeling_marian.py::TestMarian_en_zh::test_batch_generation_eng_zho
3.23s setup tests/test_modeling_marian.py::TestMarian_EN_FR::test_batch_generation_en_fr
3.20s setup tests/test_modeling_marian.py::TestMarian_RU_FR::test_batch_generation_ru_fr
3.13s setup tests/test_modeling_marian.py::TestMarian_MT_EN::test_batch_generation_mt_en
3.12s setup tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward
3.06s setup tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline
```<|||||>I fixed marian in https://github.com/huggingface/transformers/pull/7888. Do you want to try to fix `test_tokenization_fast` or let somebody else?<|||||>> Do you want to try to fix test_tokenization_fast
I will give it a go.
**edit**: Except it was just removed by the merge that just happened. So I have to start from scratch.<|||||>https://github.com/huggingface/transformers/pull/7659 may have fixed the slow tokenization tests. Checking the most recent run it's back to ~2min for the torch-only job.
https://app.circleci.com/pipelines/github/huggingface/transformers/13951/workflows/244749ce-d1ee-488f-a59d-d891fbc38ed6/jobs/100800
I will check a few more and close it if that was the culprit.
<|||||>this test for some reason has `@slow` commented out - it takes 20+ seconds - can we put it back on?
This is not a test that tests functionality that is going to change much, so should be safe to turn it off for normal CIs.
https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_auto.py#L42
```
pytest --durations=0 tests/test_tokenization_auto.py &
22.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained
4.42s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent
2.75s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_model_type
2.57s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_tokenizer_class
2.36s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained_identifier
2.06s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_from_pretrained_use_fast_toggle
2.05s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_with_correct_config
```<|||||>This one should probably also be `@slow` - all the other tests around it are `@slow`:
```
15.51s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation
```<|||||>Wrote a one liner to calculate the sub-totals for whatever pattern in the output of `pytest --durations=0` stats, as in:
```
22.15s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_pretrained
4.42s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_identifier_non_existent
2.75s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_model_type
2.57s call tests/test_tokenization_auto.py::AutoTokenizerTest::test_tokenizer_from_tokenizer_class
```
Total runtime:
```
$ cat stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'
3308
```
Total tf runtime:
```
grep _tf_ stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'
1609
```<|||||>It this common test a good candidate for `@slow`?
```
grep test_model_outputs_equivalence stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'
230
```
At least a few of them are quite slow:
```
20.11s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence
16.19s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence
13.49s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_model_outputs_equivalence
9.94s call tests/test_modeling_bert.py::BertModelTest::test_model_outputs_equivalence
9.56s call tests/test_modeling_albert.py::AlbertModelTest::test_model_outputs_equivalence
8.81s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_model_outputs_equivalence
8.29s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence
7.98s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_model_outputs_equivalence
7.87s call tests/test_modeling_xlnet.py::XLNetModelTest::test_model_outputs_equivalence
6.85s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_outputs_equivalence
6.81s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_model_outputs_equivalence
6.30s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_model_outputs_equivalence
6.25s call tests/test_modeling_roberta.py::RobertaModelTest::test_model_outputs_equivalence
5.90s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_model_outputs_equivalence
5.81s call tests/test_modeling_electra.py::ElectraModelTest::test_model_outputs_equivalence
5.79s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_model_outputs_equivalence
5.69s call tests/test_modeling_xlm.py::XLMModelTest::test_model_outputs_equivalence
5.35s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_model_outputs_equivalence
4.64s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_model_outputs_equivalence
4.34s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_model_outputs_equivalence
3.79s call tests/test_modeling_dpr.py::DPRModelTest::test_model_outputs_equivalence
3.71s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_model_outputs_equivalence
3.61s call tests/test_modeling_bart.py::BARTModelTest::test_model_outputs_equivalence
3.58s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_model_outputs_equivalence
3.57s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_outputs_equivalence
3.53s call tests/test_modeling_ctrl.py::CTRLModelTest::test_model_outputs_equivalence
3.40s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_model_outputs_equivalence
3.31s call tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_model_outputs_equivalence
3.19s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_model_outputs_equivalence
3.12s call tests/test_modeling_tf_openai.py::TFOpenAIGPTModelTest::test_model_outputs_equivalence
2.98s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence
2.93s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_model_outputs_equivalence
2.80s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence
2.59s call tests/test_modeling_longformer.py::LongformerModelTest::test_model_outputs_equivalence
2.37s call tests/test_modeling_transfo_xl.py::TransfoXLModelTest::test_model_outputs_equivalence
2.17s call tests/test_modeling_funnel.py::FunnelModelTest::test_model_outputs_equivalence
2.13s call tests/test_modeling_fsmt.py::FSMTModelTest::test_model_outputs_equivalence
2.02s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_model_outputs_equivalence
1.94s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_model_outputs_equivalence
1.70s call tests/test_modeling_t5.py::T5ModelTest::test_model_outputs_equivalence
1.60s call tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence
1.44s call tests/test_modeling_lxmert.py::LxmertModelTest::test_model_outputs_equivalence
1.22s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_model_outputs_equivalence
1.11s call tests/test_modeling_reformer.py::ReformerLSHAttnModelTest::test_model_outputs_equivalence
0.86s call tests/test_modeling_tf_ctrl.py::TFCTRLModelTest::test_model_outputs_equivalence
0.33s call tests/test_modeling_blenderbot.py::BlenderbotTesterMixin::test_model_outputs_equivalence
```<|||||>Here is another possible candidate for `@slow`:
```
grep test_torchscript stats.txt | perl -ne 's|^(.*?)s.|$x+=$1|e; END {print int $x}'
289
```
```
18.89s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
18.65s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
11.37s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_hidden_state
11.02s call tests/test_modeling_bert.py::BertModelTest::test_torchscript_output_attentions
9.69s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript
8.90s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
8.35s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript
7.82s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript
7.78s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_attentions
7.71s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_hidden_state
7.71s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_attentions
7.68s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_attentions
7.65s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript
7.37s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript
7.12s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_attentions
7.01s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_hidden_state
6.61s call tests/test_modeling_roberta.py::RobertaModelTest::test_torchscript_output_hidden_state
6.51s call tests/test_modeling_squeezebert.py::SqueezeBertModelTest::test_torchscript_output_attentions
5.48s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_hidden_state
5.06s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript
4.78s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript_output_attentions
4.71s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_attentions
4.71s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript_output_hidden_state
4.66s call tests/test_modeling_xlm.py::XLMModelTest::test_torchscript_output_hidden_state
4.59s call tests/test_modeling_xlnet.py::XLNetModelTest::test_torchscript
4.44s call tests/test_modeling_funnel.py::FunnelModelTest::test_torchscript_output_attentions
4.32s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_attentions
3.98s call tests/test_modeling_electra.py::ElectraModelTest::test_torchscript_output_hidden_state
3.73s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_hidden_state
3.61s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_attentions
3.55s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript_output_hidden_state
3.55s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript
3.54s call tests/test_modeling_bert.py::BertModelTest::test_torchscript
3.50s call tests/test_modeling_albert.py::AlbertModelTest::test_torchscript_output_hidden_state
3.46s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript
3.36s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_hidden_state
3.33s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript
3.31s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_attentions
3.27s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_attentions
3.25s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript_output_hidden_state
3.07s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript_output_attentions
2.58s call tests/test_modeling_flaubert.py::FlaubertModelTest::test_torchscript_output_hidden_state
2.56s call tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_hidden_state
2.50s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_attentions
2.30s call tests/test_modeling_t5.py::T5ModelTest::test_torchscript_output_attentions
2.28s call tests/test_modeling_openai.py::OpenAIGPTModelTest::test_torchscript_output_attentions
2.19s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_hidden_state
2.13s call tests/test_modeling_distilbert.py::DistilBertModelTest::test_torchscript_output_attentions
2.02s call tests/test_modeling_gpt2.py::GPT2ModelTest::test_torchscript_output_hidden_state
1.87s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript_output_hidden_state
1.82s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript_output_attentions
1.78s call tests/test_modeling_funnel.py::FunnelBaseModelTest::test_torchscript
1.69s call tests/test_modeling_bart.py::BARTModelTest::test_torchscript
1.51s call tests/test_modeling_layoutlm.py::LayoutLMModelTest::test_torchscript
1.34s call tests/test_modeling_dpr.py::DPRModelTest::test_torchscript_output_hidden_state
0.89s call tests/test_modeling_fsmt.py::FSMTModelTest::test_torchscript
0.79s call tests/test_modeling_bert_generation.py::BertGenerationEncoderTest::test_torchscript
0.05s call tests/test_modeling_lxmert.py::LxmertModelTest::test_torchscript_output_attentions
0.05s call tests/test_modeling_lxmert.py::LxmertModelTest::test_torchscript_output_hidden_state
0.01s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_torchscript_output_attentions
0.01s call tests/test_modeling_ctrl.py::CTRLModelTest::test_torchscript_output_attentions
0.01s call tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_torchscript_output_hidden_state
```<|||||>I am fine with marking any of these @slow , but don't care as much now that the issue is resolved.
I do think we should have a repo-wide rule about what should be slow, and you have to write a comment if you want to override it.
#### Proposed Rule for testing.rst
+ All tests that take longer than 5s should be marked slow, that way we can save a lot of back and forth.
+ For common tests to be marked slow, the slowest iteration of that common test must be > 15s.
5s/15s was arbitrary, don't care much what the value is. WDYT @LysandreJik @sgugger ?
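For reference, "marking a test slow" here just means the `slow` decorator from `transformers.testing_utils`; a minimal sketch, with a made-up test name and body:
```
# minimal sketch - the test name and body are invented for illustration
import unittest

from transformers.testing_utils import slow


class ExampleTest(unittest.TestCase):
    @slow  # skipped unless the RUN_SLOW=1 environment variable is set
    def test_something_heavy(self):
        # anything that exceeds the threshold (downloads, real training, etc.) would live here
        self.assertTrue(True)
```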
<|||||>That's a fabulous suggestion!
A few caveats for `testing.rst`:
* < 5s should include model download overhead - my numbers above exclude this, so once we get the data from CI it'll be the true measurement.
* 5s as measured on CI, since otherwise each hardware is different
<|||||>While we are at it - there is a ton of very slow tf tests - I suppose the same rule applies there, right?<|||||>As much as I love moving quickly, we need to wait for others to agree to the rule before we apply it.
My proposed rule does not differentiate between tf and torch.<|||||>My communication wasn't clear - I meant to do that after we agreed on the threshold and regardless this PR https://github.com/huggingface/transformers/pull/7884 needs to be merged first to perform the correct measurements.
I just saw that there were a **lot** of tf tests that were very slow and not marked as such, so I thought perhaps there was a special reason for them not to be `@slow`.<|||||>I'm not sure setting up a 5/15 or any specific time requirement on tests to classify them as slow would be best. Some tests, like `test_model_outputs_equivalence`, are important, and running them on contributors' PRs when their changes affect the modeling internals is too.
I think the following proposition would be more suited:
if the test is focused on one of the library's internal components (e.g., modeling files, tokenization files, pipelines), then we should run that test in the non-slow test suite. If it's focused on another aspect of the library, such as the documentation or the examples, then we should run these tests in the slow test suite. And then, to refine this approach, we should have exceptions:
- All tests that need a specific set of weights (e.g., model or tokenizer integration tests, pipeline integration tests) should be set to slow.
- All tests that need to do a training (e.g., trainer integration tests) should be set to slow.
- We can introduce exceptions if some of these should-be-non-slow tests are excruciatingly long, and set them to slow. Some examples are some auto modeling tests, which save and load large files to disk, which are set to slow.
- Others?
To that end, we should aim for all the non-slow tests to cover entirely the different internals, while making sure that the tests keep a fast execution time. Having some very small models in the tests (e.g, 2 layers, 10 vocab size, etc.) helps in that regard, as does having dummy sets of weights like the `sshleifer/tiny-xxx-random` weights. On that front, there seems to be something fishy going on with the MobileBERT model, as it's supposed to be an efficient model but takes a while to be tested. There's probably something to do for this model.
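As a concrete illustration of what "very small" means in practice, a tiny randomly-initialized config keeps a forward pass in the millisecond range; the numbers below are only an assumption, not copied from any actual ModelTester:
```
from transformers import BertConfig, BertModel

# deliberately tiny so that a forward pass takes milliseconds; all sizes are illustrative
tiny_config = BertConfig(
    vocab_size=99,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=37,
)
model = BertModel(tiny_config)  # randomly initialized, nothing gets downloaded
```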
Willing to iterate on this wording, or specify/change some aspects if you think of something better.
Following this approach:
For the [tokenization_auto tests](https://github.com/huggingface/transformers/issues/7885#issuecomment-711443513), we can definitely uncomment the `@slow`.
For the [MonoColumnInputTestCase](https://github.com/huggingface/transformers/issues/7885#issuecomment-711444043), we can also set it as a slow test.
<|||||>I would change all things that need to do a training to all things that need to do a real training.
I spent a lot of time making a mock training fast for the tests of the Trainer and I don't want that marked as slow ;-)<|||||>OK, sounds like @stas00 can mark a few as slow.
Longformer test also uses 5 layers for some reason, not sure if that matters.<|||||>@LysandreJik, just a clarification - so you propose not to have a fixed speed threshold in any of the "clauses". i.e. for non-essential tests as defined by you they should be marked as slow regardless of their speed, correct? I suppose this is smart since even very fast tests still add up to a lot since there could be many of them.<|||||>OK, so here is the full non-slow run's report on CI:
https://pastebin.com/8pkaZKjH (quoted from [this report](https://circleci.com/api/v1.1/project/github/huggingface/transformers/101622/output/108/0?file=true&allocation-id=5f8db77a5d41f9372b3f92cf-0-build%2F6825DA7E))
The top slow ones are (cut off at 10sec):
```
131.69s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_train_pipeline_custom_model
101.16s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_graph_mode
79.24s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_compile_tf_model
40.68s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_keras_save_load
38.58s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_compile_tf_model
35.05s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline
32.99s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_train_pipeline_custom_model
27.40s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_graph_mode
26.53s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_compile_tf_model
26.17s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_graph_mode
25.97s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_model_outputs_equivalence
20.57s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_train_pipeline_custom_model
18.71s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_pt_tf_model_equivalence
18.59s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
17.73s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation
17.72s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_train_pipeline_custom_model
17.27s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
17.15s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_compile_tf_model
16.72s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
16.49s call tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_train_pipeline_custom_model
16.20s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_no_configs
15.91s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_pt_tf_model_equivalence
15.64s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model
15.52s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_compile_tf_model
15.36s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs
15.32s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_train_with_configs
15.28s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_compile_tf_model
15.28s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_model_outputs_equivalence
15.24s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_pt_tf_model_equivalence
15.14s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input
14.94s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_train_pipeline_custom_model
14.62s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_compile_tf_model
14.34s call tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_graph_mode
14.32s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_model_outputs_equivalence
14.12s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens
13.75s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model
13.72s call tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model
13.67s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode
13.33s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input
13.12s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_attention_outputs
11.84s call tests/test_modeling_tf_mobilebert.py::TFMobileBertModelTest::test_save_load
11.78s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_attention_outputs
11.69s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_compile_tf_model
11.56s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_attention_outputs
11.54s call tests/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_train_pipeline_custom_model
11.41s call tests/test_modeling_tf_lxmert.py::TFLxmertModelTest::test_train_pipeline_custom_model
11.40s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_train_pipeline_custom_model
11.35s call tests/test_modeling_tf_roberta.py::TFRobertaModelTest::test_train_pipeline_custom_model
11.30s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_train_pipeline_custom_model
10.82s call tests/test_modeling_tf_electra.py::TFElectraModelTest::test_graph_mode
10.77s call tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_save_load
10.74s call tests/test_modeling_tf_bert.py::TFBertModelTest::test_train_pipeline_custom_model
10.71s call tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_graph_mode
10.60s call tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_compile_tf_model
10.57s call tests/test_modeling_tf_longformer.py::TFLongformerModelTest::test_save_load
10.55s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input
10.39s call tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_graph_mode
10.24s call tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_graph_mode
10.08s call tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs
```
@sshleifer, here is a highlight for you:
```
35.05s call tests/test_modeling_marian.py::TestMarian_en_ROMANCE::test_pipeline
```
Other slow torch tests by group:
```
18.59s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_attentions
17.27s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript
16.72s call tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torchscript_output_hidden_state
17.73s call tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_translation
15.36s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_pretokenized_inputs
15.14s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_pair_input
14.12s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_add_special_tokens
13.33s call tests/test_tokenization_deberta.py::DebertaTokenizationTest::test_maximum_encoding_length_single_input
10.55s call tests/test_modeling_blenderbot.py::Blenderbot90MIntegrationTests::test_90_generation_from_short_input
```<|||||>> @LysandreJik, just a clarification - so you propose not to have a fixed speed threshold in any of the "clauses". i.e. for non-essential tests as defined by you they should be marked as slow regardless of their speed, correct? I suppose this is smart since even very fast tests still add up to a lot since there could be many of them.
Yes, I think that would be best! I don't think there are many non-essential tests that are not slow though. We'd still like to get full coverage of the library's internals using only non-@slow tests, so getting these tests below a certain time threshold would still be important so that every PR could get quick feedback on the CI's status.<|||||>Thank you for that clarification, @LysandreJik
Please have a look at how your suggestions have been integrated into the testing doc:
https://github.com/huggingface/transformers/pull/7895/files |
transformers | 7,884 | closed | [CIs] report slow tests add --durations=0 to some pytest jobs | It appears that over the last 10 days or so the CIs have gone 4-5 times slower. From 2-3 minutes to 10-11 minutes for `run_tests_torch`.
This PR adds a report, generated by `pytest`, that lists the slowest tests so that we could quickly detect such regressions in our test suite's performance.
As suggested by @sshleifer, pytest can do the diagnostics using the `--durations=` flag. I'm not sure what the best number to set it to is - let's try 50:
I propose adding this to:
* `run_tests_torch_and_tf` job - as it runs all non-slow tests in "real-time"
* all scheduled jobs that run slow tests
**edit**: Several fixes have been merged since this slowdown was found, so we are much better off, and there is an effort to perform a major clean-up here: https://github.com/huggingface/transformers/pull/7895. To support this effort, let's start with reporting the runtime of all tests, that is `--durations=0` (`pytest` currently doesn't have an option to report only tests slower than a certain run time), and once the clean-up has been done, we can dial down to `--durations=50` to only monitor any new outliers.
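For reference, the two reporting modes mentioned above boil down to something like this (an illustrative invocation from Python via `pytest.main`, not the exact CI configuration):
```
# illustrative only - the CI passes the same flags on the pytest command line
import pytest

pytest.main(["--durations=0", "tests/"])     # report the runtime of every test
# pytest.main(["--durations=50", "tests/"])  # later: only track the 50 slowest outliers
```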
@sshleifer, @sgugger, @LysandreJik | 10-18-2020 18:42:59 | 10-18-2020 18:42:59 | |
transformers | 7,883 | closed | style: fix typo | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. | 10-18-2020 12:34:12 | 10-18-2020 12:34:12 | |
transformers | 7,882 | closed | style: fix typo in the README | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 10-18-2020 12:30:49 | 10-18-2020 12:30:49 | |
transformers | 7,881 | closed | Error(s) in loading state_dict for BertForTokenClassification | ### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
examples/token-classification: @stefan-it
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* my own modified scripts: (give details below):
I am trying to run the following script
```
from transformers import BertTokenizer, BertConfig
from transformers import BertForTokenClassification, AdamW
import torch
import argparse
import numpy as np  # np.argmax is used further down
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
parser = argparse.ArgumentParser(description='BERT Keyword Extractor')
parser.add_argument('--sentence', type=str, default=' ',
help='sentence to get keywords')
parser.add_argument('--path', type=str, default='model.pt',
help='path to load model')
args = parser.parse_args()
tag2idx = {'B': 0, 'I': 1, 'O': 2}
tags_vals = ['B', 'I', 'O']
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
def keywordextract(sentence, path):
    text = sentence
    tkns = tokenizer.tokenize(text)
    indexed_tokens = tokenizer.convert_tokens_to_ids(tkns)
    segments_ids = [0] * len(tkns)
    tokens_tensor = torch.tensor([indexed_tokens]).to(device)
    segments_tensors = torch.tensor([segments_ids]).to(device)
    model.load_state_dict(torch.load(path))
    model.eval()
    prediction = []
    logit = model(tokens_tensor, token_type_ids=None,
                  attention_mask=segments_tensors)
    logit = logit.detach().cpu().numpy()
    prediction.extend([list(p) for p in np.argmax(logit, axis=2)])
    for k, j in enumerate(prediction[0]):
        if j==1 or j==0:
            print(tokenizer.convert_ids_to_tokens(tokens_tensor[0].to('cpu').numpy())[k], j)
keywordextract(args.sentence, args.path)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* my own task or dataset: (give details below)
SemEval2017 task 10
## To reproduce
Steps to reproduce the behavior:
1. run this script with any string ("Huggingface is a blessing for transfer learning")
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The error is
```
Traceback (most recent call last):
File "keyword-extractor.py", line 40, in <module>
keywordextract(args.sentence, args.path)
File "keyword-extractor.py", line 28, in keywordextract
model.load_state_dict(torch.load(path))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([3, 768]).
size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([3]).
```
If I put in a model directly downloaded from huggingface archive, the error changes to
```
Traceback (most recent call last):
File "keyword-extractor.py", line 40, in <module>
keywordextract(args.sentence, args.path)
File "keyword-extractor.py", line 28, in keywordextract
model.load_state_dict(torch.load(path))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
Missing key(s) in state_dict: "bert.embeddings.position_ids", "bert.embeddings.LayerNorm.weight", "bert.embeddings.LayerNorm.bias", "bert.encoder.layer.0.attention.output.LayerNorm.weight", "bert.encoder.layer.0.attention.output.LayerNorm.bias", "bert.encoder.layer.0.output.LayerNorm.weight", "bert.encoder.layer.0.output.LayerNorm.bias", "bert.encoder.layer.1.attention.output.LayerNorm.weight", "bert.encoder.layer.1.attention.output.LayerNorm.bias", "bert.encoder.layer.1.output.LayerNorm.weight", "bert.encoder.layer.1.output.LayerNorm.bias", "bert.encoder.layer.2.attention.output.LayerNorm.weight", "bert.encoder.layer.2.attention.output.LayerNorm.bias", "bert.encoder.layer.2.output.LayerNorm.weight", "bert.encoder.layer.2.output.LayerNorm.bias", "bert.encoder.layer.3.attention.output.LayerNorm.weight", "bert.encoder.layer.3.attention.output.LayerNorm.bias", "bert.encoder.layer.3.output.LayerNorm.weight", "bert.encoder.layer.3.output.LayerNorm.bias", "bert.encoder.layer.4.attention.output.LayerNorm.weight", "bert.encoder.layer.4.attention.output.LayerNorm.bias", "bert.encoder.layer.4.output.LayerNorm.weight", "bert.encoder.layer.4.output.LayerNorm.bias", "bert.encoder.layer.5.attention.output.LayerNorm.weight", "bert.encoder.layer.5.attention.output.LayerNorm.bias", "bert.encoder.layer.5.output.LayerNorm.weight", "bert.encoder.layer.5.output.LayerNorm.bias", "bert.encoder.layer.6.attention.output.LayerNorm.weight", "bert.encoder.layer.6.attention.output.LayerNorm.bias", "bert.encoder.layer.6.output.LayerNorm.weight", "bert.encoder.layer.6.output.LayerNorm.bias", "bert.encoder.layer.7.attention.output.LayerNorm.weight", "bert.encoder.layer.7.attention.output.LayerNorm.bias", "bert.encoder.layer.7.output.LayerNorm.weight", "bert.encoder.layer.7.output.LayerNorm.bias", "bert.encoder.layer.8.attention.output.LayerNorm.weight", "bert.encoder.layer.8.attention.output.LayerNorm.bias", "bert.encoder.layer.8.output.LayerNorm.weight", "bert.encoder.layer.8.output.LayerNorm.bias", "bert.encoder.layer.9.attention.output.LayerNorm.weight", "bert.encoder.layer.9.attention.output.LayerNorm.bias", "bert.encoder.layer.9.output.LayerNorm.weight", "bert.encoder.layer.9.output.LayerNorm.bias", "bert.encoder.layer.10.attention.output.LayerNorm.weight", "bert.encoder.layer.10.attention.output.LayerNorm.bias", "bert.encoder.layer.10.output.LayerNorm.weight", "bert.encoder.layer.10.output.LayerNorm.bias", "bert.encoder.layer.11.attention.output.LayerNorm.weight", "bert.encoder.layer.11.attention.output.LayerNorm.bias", "bert.encoder.layer.11.output.LayerNorm.weight", "bert.encoder.layer.11.output.LayerNorm.bias", "classifier.weight", "classifier.bias".
Unexpected key(s) in state_dict: "cls.predictions.bias", "cls.predictions.transform.dense.weight", "cls.predictions.transform.dense.bias", "cls.predictions.transform.LayerNorm.gamma", "cls.predictions.transform.LayerNorm.beta", "cls.predictions.decoder.weight", "cls.seq_relationship.weight", "cls.seq_relationship.bias", "bert.pooler.dense.weight", "bert.pooler.dense.bias", "bert.embeddings.LayerNorm.gamma", "bert.embeddings.LayerNorm.beta", "bert.encoder.layer.0.attention.output.LayerNorm.gamma", "bert.encoder.layer.0.attention.output.LayerNorm.beta", "bert.encoder.layer.0.output.LayerNorm.gamma", "bert.encoder.layer.0.output.LayerNorm.beta", "bert.encoder.layer.1.attention.output.LayerNorm.gamma", "bert.encoder.layer.1.attention.output.LayerNorm.beta", "bert.encoder.layer.1.output.LayerNorm.gamma", "bert.encoder.layer.1.output.LayerNorm.beta", "bert.encoder.layer.2.attention.output.LayerNorm.gamma", "bert.encoder.layer.2.attention.output.LayerNorm.beta", "bert.encoder.layer.2.output.LayerNorm.gamma", "bert.encoder.layer.2.output.LayerNorm.beta", "bert.encoder.layer.3.attention.output.LayerNorm.gamma", "bert.encoder.layer.3.attention.output.LayerNorm.beta", "bert.encoder.layer.3.output.LayerNorm.gamma", "bert.encoder.layer.3.output.LayerNorm.beta", "bert.encoder.layer.4.attention.output.LayerNorm.gamma", "bert.encoder.layer.4.attention.output.LayerNorm.beta", "bert.encoder.layer.4.output.LayerNorm.gamma", "bert.encoder.layer.4.output.LayerNorm.beta", "bert.encoder.layer.5.attention.output.LayerNorm.gamma", "bert.encoder.layer.5.attention.output.LayerNorm.beta", "bert.encoder.layer.5.output.LayerNorm.gamma", "bert.encoder.layer.5.output.LayerNorm.beta", "bert.encoder.layer.6.attention.output.LayerNorm.gamma", "bert.encoder.layer.6.attention.output.LayerNorm.beta", "bert.encoder.layer.6.output.LayerNorm.gamma", "bert.encoder.layer.6.output.LayerNorm.beta", "bert.encoder.layer.7.attention.output.LayerNorm.gamma", "bert.encoder.layer.7.attention.output.LayerNorm.beta", "bert.encoder.layer.7.output.LayerNorm.gamma", "bert.encoder.layer.7.output.LayerNorm.beta", "bert.encoder.layer.8.attention.output.LayerNorm.gamma", "bert.encoder.layer.8.attention.output.LayerNorm.beta", "bert.encoder.layer.8.output.LayerNorm.gamma", "bert.encoder.layer.8.output.LayerNorm.beta", "bert.encoder.layer.9.attention.output.LayerNorm.gamma", "bert.encoder.layer.9.attention.output.LayerNorm.beta", "bert.encoder.layer.9.output.LayerNorm.gamma", "bert.encoder.layer.9.output.LayerNorm.beta", "bert.encoder.layer.10.attention.output.LayerNorm.gamma", "bert.encoder.layer.10.attention.output.LayerNorm.beta", "bert.encoder.layer.10.output.LayerNorm.gamma", "bert.encoder.layer.10.output.LayerNorm.beta", "bert.encoder.layer.11.attention.output.LayerNorm.gamma", "bert.encoder.layer.11.attention.output.LayerNorm.beta", "bert.encoder.layer.11.output.LayerNorm.gamma", "bert.encoder.layer.11.output.LayerNorm.beta".
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
To extract the keywords from a string | 10-18-2020 11:30:35 | 10-18-2020 11:30:35 | Hello, I would say that there's a mismatch between the number of labels you've provided and the number of labels supported by your `model.pt`.
How did you obtain your `model.pt`?<|||||>I downloaded it directly from the links provided for each bert models with `!wget`<|||||>Which links? Why did you switch from using the `from_pretrained` utility to the `load_state_dict` torch util?<|||||>I had to use the same model for two tasks. Since the downloaded model using `.from_pretrained` method go to root/ directory in colab, they aren't directly accessible (I actually opened an issue about that as well [here](https://github.com/huggingface/transformers/issues/7847)). Thus, to use the model for both tasks, I downloaded it manually in a model/ directory using `!wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I had the case when we tried to provide our own tokenizer with vocab.txt file.
The mistake we made was that we updated the config.json file and set vocab_size to the total number of lines in the new vocab.txt file.
That was the wrong step to take: config.json has to be exactly the one that matches the model.pt we want to use. If the model.pt is from the hub, then there is no need to change the config.json.
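A hedged sketch of the general point in this thread - the configuration used to rebuild the model must match the checkpoint being loaded (the path and the label count below are made up, not taken from the thread):
```
import torch
from transformers import BertConfig, BertForTokenClassification

# "model.pt" and num_labels=2 are illustrative assumptions
config = BertConfig.from_pretrained("bert-base-uncased", num_labels=2)  # must match the checkpoint
model = BertForTokenClassification(config)
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)  # only succeeds when every tensor shape lines up
```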
|
transformers | 7,880 | closed | Fix bug in _sorted_checkpoints | I'm using transformers 3.3.1 and run a training script with `--save_total_limit 3`. I hit the exception below, and after debugging the code found that it wrongly tries to index into the `best_model_checkpoint`'s *str* rather than the `sorted_checkpoints` array. When running without the fix I got this exception:
```
Traceback (most recent call last):
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 921, in _save_training
self._rotate_checkpoints(use_mtime=True)
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1283, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/<HOME>/.conda/envs/transformers/lib/python3.7/site-packages/transformers/trainer.py", line 1274, in _sorted_checkpoints
checkpoints_sorted[best_model_index],
TypeError: 'str' object does not support item assignment
```
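A hedged illustration of that TypeError, with made-up checkpoint paths rather than the actual trainer code:
```
# illustrative only: assigning into a str vs. into the list of checkpoints
best_model_checkpoint = "output/checkpoint-500"
checkpoints_sorted = ["output/checkpoint-100", "output/checkpoint-500", "output/checkpoint-900"]

best_model_index = checkpoints_sorted.index(best_model_checkpoint)
# buggy form: indexing into the string raises the exact error above
#   best_model_checkpoint[best_model_index] = checkpoints_sorted[-1]
# fixed form: assign into the list of checkpoints instead
checkpoints_sorted[best_model_index], checkpoints_sorted[-1] = (
    checkpoints_sorted[-1],
    checkpoints_sorted[best_model_index],
)
```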
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-18-2020 09:13:44 | 10-18-2020 09:13:44 | Ooh, that's an ugly typo. Thanks for fixing! Will make sure to add a test of this soon so it doesn't break with me being stupid like that. |
transformers | 7,879 | closed | question about `add_special_tokens` and embedding layer | # ❓ Questions & Help
When I call add_special_tokens on the tokenizer, a new token that is not in the existing vocab is added. At that point the vocabulary size differs from the existing embedding layer, so what happens inside?
1. Abandon the existing embedding layer and train a new one.
2. train only new tokens (expand existing embedding layer) | 10-18-2020 08:33:35 | 10-18-2020 08:33:35 | When you add special tokens to the tokenizer, you should resize the model embedding matrix so that it contains the extra embeddings.
You should then do an additional training/fine-tuning on data containing those special tokens so that the model may tune this new column in the embedding matrix.<|||||>thanks ! |
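For reference, a minimal sketch of that resize step; the added token `"<new_tok>"` is made up for illustration:
```
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<new_tok>"]})
model.resize_token_embeddings(len(tokenizer))  # keeps the existing rows, appends randomly initialized ones
```
The existing embeddings are preserved, so only the freshly appended rows start from random values and get tuned during the follow-up fine-tuning.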
transformers | 7,878 | closed | [multiple models] skip saving/loading deterministic state_dict keys | Implementing https://github.com/huggingface/transformers/issues/7296
* [x] make testing save/load of `keys_to_never_save` a common test by moving it from `fsmt` modeling test
* [x] implement skipping for all models with static positional embeddings: mbart, pegasus, marian (fsmt had this feature in first place)
Fixes: #7296
@LysandreJik, @sshleifer, @sgugger | 10-18-2020 04:29:14 | 10-18-2020 04:29:14 | OK, so how would you recommend we run a specific common test for a model whose test module isn't running common tests at all?
Please have a look at the approach I used for `mbart` https://github.com/huggingface/transformers/pull/7878/files#diff-f3b9af7b4721bdf972a72e35a35e8742ddc986f12218e728cd2b6ad30a156abb
I basically made a very stripped down `ModelTester` and then used an alias hack to run just a specific common test via:
```
class SelectiveCommonTest(unittest.TestCase):
    test_save_load_keys_to_never_save = ModelTesterMixin.test_save_load_keys_to_never_save
```
Is this any good, or would you recommend a different approach - the idea is not to copy the same test to each test module.
<|||||>I like how you did it.
Adding @LysandreJik and @sgugger to see if they agree before you do it 5 times.
<|||||>> I imagine you double-checked the test was indeed executed.
Yes.
Indeed, my initial test that I wrote for fsmt wasn't executing, since I renamed the keys in the code and forgot the test and the test was no-op if it didn't find the right keys. So some print()s helped to verify it indeed worked in this new incarnation.<|||||>I think `test_save_load_missing_keys` should be handled separately since it can add a significant overhead if run by all models, since I noticed tf models are **very** slow at save/load tests.. I reverted moving it from fsmt to common tests for now - back to just fsmt - will do a separate PR about it.
<|||||>Anybody with a good knowledge of XLM? @sshleifer suggested to add xlm to this PR, but I'm not sure whether it's safe to not save `self.position_embeddings`, since as you can see here:
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_xlm.py#L439
the deterministic non-trained weights happen only if `config.sinusoidal_embeddings` is True. Otherwise it gets trained like any other embed layer.
Which leaves just `position_ids`, which is a tiny set of 512-2048 floats and not worth bothering with IMHO.
<|||||>wrt t5 - I don't see anything positional or deterministic for that matter, but this is quite a different model - are there any state_dict keys here that would be a good candidate for not saving?<|||||>I implemented this for: mbart, pegasus, marian - all subclasses of `BartForConditionalGeneration` w/ `static_position_embeddings=True`. `fsmt` already had it.
Anything else that I may have missed?
If not, this PR is good to go. |
transformers | 7,877 | closed | [wip] [pegasus] encode s/\r?\n/<n>/g + test | As discussed in https://github.com/huggingface/transformers/issues/7743
* [x] On encode: `s/\r?\n/<n>/g` + test
* [ ] On decode: `s/<n>/\n/g` - not sure what to do since `run_eval.py` can't work with multiline outputs as it uses readline to get each record and inserting `\n` would break the records if they are now multiline.
@sshleifer
Fixes: #7743 | 10-18-2020 03:57:10 | 10-18-2020 03:57:10 | I have some concerns defaulting this logic to `True`, given that some `sshleifer/` models like `sshleifer/pegasus-cnn-ft-v2/`, `distill-pegasus-cnn-16-4/` were trained without newlines in the dataset. I'm sorry I forgot to mention these until now.
I would merge this if the logic to do the regex were defaulted to False and we checked that `PegasusFastTokenizer` did the same thing.
I would also be fine abandoning, since you have already solved the Pegasus replication issue with your `build/` scripts, and we haven't had anybody external ask for this yet.<|||||>Sure, we can just leave it unmerged and if someone reports an issue get back to sorting this out. It was a trivial change, so I have no problem with closing this. It's your call, @sshleifer.<|||||>will reopen if needed, thx for being flexible @stas00 |
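For reference, the encode-side substitution this PR is about boils down to something like the following sketch on raw text (not the actual tokenizer patch; the helper name is made up):
```
import re


def encode_newlines(text: str) -> str:
    # Pegasus pre-training used "<n>" in place of newlines, so map \r\n / \n to it on encode
    return re.sub(r"\r?\n", "<n>", text)


assert encode_newlines("line one\r\nline two\nline three") == "line one<n>line two<n>line three"
```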
transformers | 7,876 | closed | xlm-mlm-17-1280 & xlm-mlm-100-1280 include languages? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
Where is the list showing what languages are supported by each of these two models? | 10-18-2020 02:59:59 | 10-18-2020 02:59:59 | Hi, you can find this data [here](https://github.com/facebookresearch/XLM#the-17-and-100-languages). |
transformers | 7,875 | closed | How to do categorical sequence classification? | Whenever I have a sequence of max length y of batch size x: (x, y) and categorical (z) classification (x, z) I get the error that batch size isn't the same x vs x*z, how do I fix this? | 10-17-2020 22:40:48 | 10-17-2020 22:40:48 | Solved by adding `num_labels` to the constructor: `BertForSequenceClassification.from_pretrained("...", num_labels=z)`. Then the onehot-encoding wasn't necessary either. Another error would probably make it easier to spot this though. |
transformers | 7,874 | closed | [Rag] extend_enc_output fails when number of retrieved documents not equal to RagConfig.n_docs | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Rag
The problem arises when using:
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: `dummy_dataset`
## To reproduce
Steps to reproduce the behavior:
1. Use the "calling retriever separately" example but modify the number of retrieved documents parameter `n_docs` as follows -
```
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt", n_docs=2)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code snippet -
```
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True)
input_dict = tokenizer.prepare_seq2seq_batch("What is capital of France?", return_tensors="pt")
input_ids = input_dict["input_ids"]
# Calling retriever separately
question_hidden_states = model.question_encoder(input_ids)[0]
# Default value of RagConfig.n_docs is 5 hence change n_docs to different than default value
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt", n_docs=2)
doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
outputs = model.generate(context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(generated_string)
```
Stacktrace -
```
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, context_attention_mask, doc_scores, max_length, min_length, early_stopping, use_cache, num_beams, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, bad_words_ids, num_return_sequences, decoder_start_token_id, **kwargs)
1353
1354 # correctly extend last_hidden_state and attention mask
-> 1355 context_attention_mask = extend_enc_output(context_attention_mask, num_beams=num_beams)
1356 encoder_outputs["last_hidden_state"] = extend_enc_output(last_hidden_state, num_beams=num_beams)
1357
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in extend_enc_output(tensor, num_beams)
1346 def extend_enc_output(tensor, num_beams=None):
1347 # split into `batch_size`, `num_beams`, `num_docs`
-> 1348 tensor = tensor[None, None, :].reshape((batch_size, 1, self.config.n_docs) + tensor.shape[1:])
1349 # repeat same last hidden states over `num_beams` dimension
1350 tensor = tensor.expand((batch_size, num_beams, self.config.n_docs) + tensor.shape[3:])
RuntimeError: shape '[0, 1, 5, 300]' is invalid for input of size 600
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I think `extend_enc_output` should use tensor dimension instead of `self.config.n_docs` as follows -
```
tensor = tensor[None, None, :].reshape((batch_size, 1, tensor.shape[0:]) + tensor.shape[1:])
```
Let me know if proposed fix is fine, I can raise PR. | 10-17-2020 21:35:25 | 10-17-2020 21:35:25 | @lalitpagaria - good point! I think the cleanest solution here would actually be to add `n_docs` as a argument to all RagModel forward's function that defaults to `self.config.n_docs` (exactly as it is the case for `use_cache` or `output_attentions`) => it would be great if you could open a PR for this :-) <|||||>@patrickvonplaten I think this approach also will not solve a situation when retriever return less docs than `n_docs`. For example let's say a dataset only have `4` documents but `n_docs` value is `5`.
How about solving this as follows -
1) When fetching docs from the retriever, use the `n_docs` value (either as an argument or defaulting to `self.config.n_docs`)
2) Everywhere else, use the number of documents actually returned by the retriever, i.e. dimension `0` of `context_input_ids` can be used
Let me know what do you think.<|||||>I think this is an edge case - if a dataset only has 4 documents, then the user should not define `n_docs` as 5. I agree that it would be easier to just use the incoming tensor's dimension as `n_docs`, but it would be less readable and can also lead to confusion. `n_docs` should always be the number of docs actually given to Rag. So I'd prefer to not use the retriever's dimension here. We could add an assert that the dimension 0 is always equal `n_docs` which would be helpful for the user, but I think we should still rely on the `n_docs` parameter.<|||||>Sure I will create PR (using n_docs with assert) with related tests. |
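A sketch of the direction agreed on above - `n_docs` passed explicitly, defaulting to the config value, with a sanity check against the retrieved tensors (the helper name is made up; this is not the actual patch):
```
def resolve_n_docs(n_docs, config, context_input_ids, batch_size):
    # not the actual patch - just the shape of the agreed-on change
    n_docs = n_docs if n_docs is not None else config.n_docs
    assert context_input_ids.shape[0] == batch_size * n_docs, (
        f"expected context_input_ids to have {batch_size * n_docs} rows, "
        f"got {context_input_ids.shape[0]}"
    )
    return n_docs
```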
transformers | 7,873 | closed | can't set evaluation_strategy to "epoch" | ## Environment info
- `transformers` version: 3.3.1
### Who can help
@sgugger
## Information
Seems like we can't use the evaluation_strategy=epoch option. The default value of evaluate_during_training is boolean (False). In transformers/training_args.py, __post_init__ function, there is a check - `if self.evaluate_during_training is not None:` (line 326). If I'm not mistaken, it will always result in True. I think that if you change the line into `if self.evaluate_during_training:` the problem will be solved.
Thank you so much for your awesome work :) | 10-17-2020 19:55:17 | 10-17-2020 19:55:17 | This has been done already, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py#L335). But you need to install from source for that, since the change is recent,<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
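For reference, once on a recent enough version the intended usage looks something like this sketch (the values are illustrative):
```
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",             # illustrative path
    evaluation_strategy="epoch",  # evaluate at the end of every epoch
    num_train_epochs=3,
)
```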
|
transformers | 7,872 | closed | Fix Rag example docstring | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #7829
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-17-2020 19:45:14 | 10-17-2020 19:45:14 | |
transformers | 7,871 | closed | RAG generate function uses input_ids even when context_input_ids are given. | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-51-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
I think @patrickvonplaten has been checking RAG issues
## Information
Model I am using RagTokenForGeneration:
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
One can use the demo for RAG [currently in PR](https://github.com/huggingface/transformers/pull/7455) but it will happen in any case.
1. Load a RagTokenForGeneration model.
2. Generate the `context_input_ids` (in the demo this is done with a forward pass).
3. Use the generate function without giving `input_ids`, which is supposed to be an optional input.
4. The function check the `batch_size` using `input_ids` and breaks since they are a None in this line: https://github.com/huggingface/transformers/blob/9f7b2b243230a0ff7e61f48f852e3a5b6f6d86fa/src/transformers/modeling_rag.py#L1311
```
query = "My question"
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
rag_conf = RagConfig.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", question_encoder_tokenizer = tokenizer.question_encoder, generator_tokenizer = tokenizer.generator, index_name="custom", indexed_dataset=dataset)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
device = "cuda:0"
input_ids = tokenizer(query, return_tensors="pt").input_ids.to(device)
with torch.no_grad():
# retrieve support docs
retrieved_outputs = model(input_ids, labels=None, output_retrieved=True)
dl_scores = retrieved_outputs.doc_scores[0].tolist()
dp_scores = retrieved_outputs.doc_scores.softmax(dim=-1)[0].tolist()
doc_dicts = retriever.index.get_doc_dicts(retrieved_outputs.retrieved_doc_ids)[0]
support_docs = [
{"score": ls, "proba": ns, "title": ti, "text": te}
for ls, ns, ti, te in zip(dl_scores, dp_scores, doc_dicts["title"], doc_dicts["text"])
]
# generate answers
generated_ids = model.generate(
context_input_ids=retrieved_outputs.context_input_ids,
context_attention_mask=retrieved_outputs.context_attention_mask,
doc_scores=retrieved_outputs.doc_scores,
num_beams=4,
num_return_sequences=4,
min_length=2,
max_length=64,
length_penalty=1.0,
)
```
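For clarity, this is a rough sketch of the kind of fallback I would expect inside `generate()` (hypothetical, not the actual `modeling_rag.py` code; it only relies on `doc_scores` having shape `(batch_size, n_docs)`):
```python
# Hypothetical sketch of the fallback I have in mind (not the actual library code).
def infer_batch_size(input_ids=None, doc_scores=None):
    if input_ids is not None:
        return input_ids.shape[0]
    return doc_scores.shape[0]  # doc_scores is (batch_size, n_docs)
```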
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The batch size should be obtained differently. For instance `batch_size = doc_scores.shape[0]`. | 10-17-2020 18:56:53 | 10-17-2020 18:56:53 | Hey @LittlePea13 - yes you are very much correct here. This PR linked to the issue should fix it. Thanks a lot! |
transformers | 7,870 | closed | Add missing comma | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-17-2020 15:55:18 | 10-17-2020 15:55:18 | |
transformers | 7,869 | closed | Raise error when using AMP on non-CUDA device | The trainer currently implements native AMP in such a way that a GradScaler is always used. AFAIK, and as supported by [this post](https://github.com/pytorch/pytorch/issues/36169#issuecomment-611261781) and [this report on the forums](https://discuss.huggingface.co/t/training-gpt2-on-cpus/1603), this will only work on CUDA devices. Therefore, an error should probably be thrown when a user tries to use `--fp16` alongside a non-CUDA device.
This PR is currently very brief because I am uncertain about a few things:
- I am not sure if the current implementation of `--fp16` in the trainer works with TPUs;
- I am not sure about differences in behaviour between using `apex` and native AMP in this respect;
- I am not entirely sure whether `_post_init` is the right place to do an argument sanity check. Particularly because I am now calling `self.device` which will set the device through `_set_devices`, even though you may (?) want to delay that as long as possible until the arguments are used in the trainer.
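For concreteness, this is roughly the check I have in mind (a minimal sketch; the placement in the post-init and the exact wording are assumptions, not the final implementation):
```python
# Minimal sketch of the intended sanity check (placement and message are assumptions).
if self.fp16 and self.device.type != "cuda":
    raise ValueError(
        f"AMP (--fp16) is only supported on CUDA devices, but the selected device is '{self.device.type}'."
    )
```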
cc @sgugger | 10-17-2020 12:48:28 | 10-17-2020 12:48:28 | > I think the postinit is a great place for this sanity check, so the error comes fast to the user, and I don't think our current FP16 implementations (Apex and Amp) work with TPUs. I can check the latter on Monday and report back here.
If you'd rather be more forgiving, let me know. In that case we can do a logging.warning(f"AMP only supported when using CUDA. Turning off AMP because not using a CUDA device ({self.device.type})") and subsequently automatically turning off --fp16. <|||||>Ok, just tested on TPUs and FP16 does not work on them indeed, though native amp fails graciously (testing on TPUs with PyTorch < 1.6 is suicide so I did not try APEX). So if you fix the styling issue, I think this is good to be merged!
<|||||>It has to be run by the user, I can force-push the styling on your branch if you don't have the setup to run it easily.<|||||>> It has to be run by the user, I can force-push the styling on your branch if you don't have the setup to run it easily.
Thanks but it should work fine now. Updated local dependencies and all that and reformatted.<|||||>Thanks for your contribution! |
transformers | 7,868 | closed | Julibert model card | Model card for Julibert Catalan model | 10-17-2020 12:06:44 | 10-17-2020 12:06:44 | |
transformers | 7,867 | closed | Add AI-SOCO models | # What does this PR do?
Adding AI-SOCO language and classification models.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@julien-c. | 10-17-2020 09:45:37 | 10-17-2020 09:45:37 | |
transformers | 7,866 | closed | AttributeError: module 'tensorflow_core.keras.activations' has no attribute 'swish' | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
tensorflow 2.0.0
after "pip install transformers", i import transformers, meet this error.
**A link to original question on the forum/Stack Overflow**: | 10-17-2020 09:09:15 | 10-17-2020 09:09:15 | > # ❓ Questions & Help
> ## Details
> tensorflow 2.0.0
> after "pip install transformers", i import transformers, meet this error.
>
> **A link to original question on the forum/Stack Overflow**:
兄弟,升级tf到2.3<|||||>Hi, you should install TensorFlow 2.3 as mentioned by @wmathor |
transformers | 7,865 | closed | labels and decoder_input_ids | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
What is the difference between labels and decoder_input_ids for EncoderDecoderModel in the text summarization task?
**A link to original question on the forum/Stack Overflow**: | 10-17-2020 06:10:18 | 10-17-2020 06:10:18 | I also get confused by the notation and would appreciate some clarifications.
Thank you.<|||||>@AI678 I think I got it. Do not take my words as absolute truth since I am new to the subject.
If you are talking about a full Transformer architecture (e.g. BART, T5, PEGASUS), the labels are the token ids to which you compare the logits generated by the Decoder in order to compute the cross-entropy loss. This should be the only input necessary during training or fine tuning.
On the other hand, the decoder_input_ids have the exact same structure of the labels, but are shifted one position to the right in order to add the \<start of sentence\> token. The decoder_input_ids are then passed to the decoder (along with the mask, which masks all unseen tokens!!). The first input of the decoder will be the \<start of sentence\> token so that it starts generating the sentence.
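A tiny numeric sketch of what I mean (the token ids are made up, and `0`/`2` stand in for the start/end tokens; this is just an illustration, not a specific model's convention):
```python
import torch

bos, eos = 0, 2                                  # assumed special token ids, for illustration only
labels = torch.tensor([[21, 37, 42, eos]])       # what the cross-entropy loss is computed against
decoder_input_ids = torch.cat([torch.tensor([[bos]]), labels[:, :-1]], dim=1)  # same ids shifted right
# decoder_input_ids -> [[bos, 21, 37, 42]]
# Many seq2seq models in the library can build decoder_input_ids from labels automatically,
# but it is worth checking the docs of the specific model you use.
```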
Hope this help, waiting for somebody to acknowledge my answer.<|||||>Hi, the glossary has been completed with these terms, you can check it out [here](https://huggingface.co/transformers/glossary.html), let me know if it's still not clear enough. |
transformers | 7,864 | closed | Create model card for pre-trained NLI models. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-17-2020 02:40:40 | 10-17-2020 02:40:40 | Hi @julien-c, thank you so much for the pointer. I have added the dataset identifier and license.<|||||>Thanks! Your model is now linked from https://huggingface.co/datasets/anli (and the other datasets) |
transformers | 7,863 | closed | [testing] rename skip targets + docs | As discussed in https://github.com/huggingface/transformers/issues/6349 this PR brings consistency to skip decorators, so now we will have:
* `require_torch` - this test will run only under torch
* `require_torch_gpu` - as `require_torch` plus requires at least 1 GPU
* `require_torch_multigpu` - as `require_torch` plus requires at least 2 GPUs
* `require_torch_non_multigpu` - as `require_torch` plus requires 0 or 1 GPUs
* `require_torch_tpu` - as `require_torch` plus requires at least 1 TPU
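To illustrate, here is a quick usage sketch (a made-up test; the decorators are assumed to come from `transformers.testing_utils`):
```python
# Made-up test showing how the renamed decorators are meant to be used.
import unittest

from transformers.testing_utils import require_torch, require_torch_gpu, require_torch_multigpu


class ExampleTest(unittest.TestCase):
    @require_torch
    def test_any_torch(self):
        ...  # runs whenever torch is installed

    @require_torch_gpu
    def test_one_gpu(self):
        ...  # skipped unless at least 1 GPU is available

    @require_torch_multigpu
    def test_two_gpus(self):
        ...  # skipped unless at least 2 GPUs are available
```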
Documentation updated and expanded.
The main change was done by running:
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_multigpu|require_torch_multigpu|g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_torch_and_cuda|require_torch_gpu|g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|require_non_multigpu|require_torch_non_multigpu|g' {} \;
```
Fixes: #6349
@LysandreJik, @sgugger, @sshleifer | 10-17-2020 02:17:56 | 10-17-2020 02:17:56 | I resolved conflicts - it's good to go now. |
transformers | 7,862 | closed | unshared Albert | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
How can I remove ALBERT's parameter sharing during fine-tuning (i.e., just use ALBERT like BERT)?
**A link to original question on the forum/Stack Overflow**: | 10-17-2020 02:05:30 | 10-17-2020 02:05:30 | Hello! The layers are shared in ALBERT because even though there are `config.num_hidden_layers=n`, there's usually `config.num_hidden_groups = 1`. Change that value to `n` to have `n` different layers.<|||||>Thanks for your reply, I have another question about that
How can I load the AlbertModel parameters (in the transformer blocks) into the unshared ALBERT? When I load, I just get:

###_IncompatibleKeys(missing_keys=['embeddings.position_ids'], unexpected_keys=[])###
Is this wrong? Can you tell me how I can load the parameters correctly?
looking forward to your reply<|||||>Could you show me the full code used that generated this error? Or can you give an example so I can reproduce it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,861 | closed | [testing] remove USE_CUDA | As discussed at https://github.com/huggingface/transformers/issues/6349 this PR removes `USE_CUDA`.
I don't think I needed to add `CUDA_VISIBLE_DEVICES=""` to `.circleci` config files since those CIs have no gpus anyway.
@LysandreJik, @sgugger, @sshleifer
fixes: #6349 | 10-17-2020 01:53:55 | 10-17-2020 01:53:55 | |
transformers | 7,860 | closed | [fsmt test] basic config test with online model + super tiny model | This PR does:
* [x] Seeing the issue in https://github.com/huggingface/transformers/pull/7659, I realized fsmt didn't have a basic non-slow test that loads the tokenizer via online model files. Luckily, a totally unrelated examples test caught this issue in that PR, so this adds a very simple, quick test to the main test suite so that it is run by the normal CI.
* [x] while I was building this test, I needed a new tiny model, so I refined the script https://github.com/huggingface/transformers/blob/master/scripts/fsmt/fsmt-make-tiny-model.py and made a new one that creates a 50 times smaller model, so we are now at 60KB, instead of 3MB.
@LysandreJik, @sshleifer
| 10-17-2020 01:27:45 | 10-17-2020 01:27:45 | Hi Stas, thanks, #7659 will fix this (we will now require at least one example checkpoint for each tokenizer and we test it automatically).<|||||>@thomwolf, I'm not certain why you closed this. This is a tokenizer test that is needed - the issue caught in examples was just a flag that there was a missing test in the normal tests.
Actually, not only it's needed, I will have to expand this test to verify that it **doesn't** get the hardcoded default values, but fetches the correct values from `tokenizer_config.json`. And to change the default values to be different from `TINY_FSMT`, since otherwise it won't be testing the right thing.
Feel free to add it as part of https://github.com/huggingface/transformers/pull/7659 but please make sure you used different from `TINY_FSMT` hardcoded defaults.
I hope this makes sense. <|||||>Hmm, but you copied `TINY_FSMT` in https://github.com/huggingface/transformers/commit/1885ca7f29ffa5373ea848edad8efab02809c268, how will then the tests check it can fetch that data if this is now hardcoded? I am not following this. Do I need to create `TINY_FSMT2` with different values?
I haven't read the new code in depth, but my gut feeling is that the defaults may mask a problem.<|||||>Hi stas, I'll let you read the new code and then we can have a look together.
The basic idea is that we now require a full and working checkpoint for the tokenizers to be fully tested in various conditions and the slow vs. fast compared.
The question of testing that tokenizers load and use `tokenizer_config.json` is another question and we should indeed address it in a subsequent PR if it's not addressed already indeed.<|||||>That works.
But please re-open this PR, since we need it anyway. I will add more changes to it after your big PR merge to ensure that the loading of the tokenizer is properly tested.<|||||>This PR is complete now. |
transformers | 7,859 | closed | [s2s testing] turn all to unittests, use auto-delete temp dirs | Currently in `examples/seq2seq/test_seq2seq_examples.py` many tmp dirs remain uncleaned by the end of the test:
This PR does:
* Now that we have `parameterized`, convert all tests into unittests
* Now that we have unittests, we can use auto-delete tmp dirs easily using `get_auto_remove_tmp_dir()`
* convert 4 test files
* moved out the broken multi_gpu test - working on it separately here https://github.com/huggingface/transformers/pull/7281 - will re-add when it's complete.
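For reference, the pattern the converted tests follow looks roughly like this (sketch only; `TestCasePlus` and `get_auto_remove_tmp_dir` are assumed to come from `transformers.testing_utils`):
```python
# Rough sketch of the new test pattern.
from transformers.testing_utils import TestCasePlus


class ExampleSeq2SeqTest(TestCasePlus):
    def test_something(self):
        tmp_dir = self.get_auto_remove_tmp_dir()  # created now, deleted automatically when the test ends
        # tmp_dir = self.get_auto_remove_tmp_dir("/tmp/favoritedir")  # or pin a fixed path while debugging
        ...
```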
@sshleifer | 10-16-2020 20:37:34 | 10-16-2020 20:37:34 | @sshleifer, one thing I wasn't sure about - since now the temp dir creation/destruction is automated and it's easy to hardwire those by just passing the explicit path `self.get_auto_remove_tmp_dir("/tmp/favoritedir")` when debugging - I removed the prefices that were used in `tempfile.mkdtemp`. Do you still want to have the option of having a specific prefix? With prefix you still have to hunt for the unique ID which changes on every test rerun. This is a much simpler option, IMHO.
To me it sounds it'd be useful during a debug to have one common path prefix for all tmp dirs of the same test and then add suffices instead for the different specific tmp dir of the test. <|||||>Whatever you think is best! <|||||>I think the prefixes are no longer needed. So it's good to go. |
transformers | 7,858 | closed | Trainer with Iterable Dataset | # What does this PR do?
Fixes #5990
Follows #5995 (closed because stale).
Merges all commits from master.
* accommodate an iterable dataset without a predefined length
* set it as 1 use case: provide max_steps, and NO num_epochs
* Is a merge of master and PR 5995
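A minimal usage sketch of the intended behavior (illustrative only; `model` and the streaming source are assumed to exist and are not part of this PR):
```python
# Illustrative sketch, not code from this PR.
from torch.utils.data import IterableDataset

from transformers import Trainer, TrainingArguments


class StreamingDataset(IterableDataset):
    def __iter__(self):
        yield from my_streaming_source()  # hypothetical generator of feature dicts, no __len__ available


args = TrainingArguments(output_dir="out", max_steps=10_000)  # max_steps given, no num_train_epochs
trainer = Trainer(model=model, args=args, train_dataset=StreamingDataset())
trainer.train()
```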
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-16-2020 20:25:19 | 10-16-2020 20:25:19 | Now investigating this FAILED test.
So far, it is PASSED on 3 different environments I have tried.
`FAILED tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens` from `run_tests_torch_and_tf`<|||||>@sgugger
CircleCI `run_tests_torch_and_tf` is FAILED, because
```
FAILED tests/test_tokenization_fast.py::SentencePieceFastTokenizerTest::test_add_special_tokens
==== 1 failed, 2964 passed, 493 skipped, 386 warnings in 925.76s (0:15:25) =====
```
On 2 different environments, I got a PASS.
I reproduced the CI run step by step, and it is still PASSED:
```
conda create -n trans_test_tf_torch python=3.6
conda activate trans_test_tf_torch
git clone [email protected]:huggingface/transformers.git
cd transformers
git fetch origin pull/7858/head:pull/7858
git pull origin pull/7858
git checkout pull/7858
pip install --upgrade pip
pip install git+https://github.com/huggingface/datasets
pip install .[sklearn,tf-cpu,torch,testing]
pip install codecov pytest-cov
```
Result:
`python -m pytest -n auto --dist=loadfile -s -v tests/test_tokenization_fast.py --cov`
`91 passed, 11 warnings in 1851.05s (0:30:51)`
For all the tests:
`python -m pytest -n auto --dist=loadfile -s -v ./tests/ --cov `
`2965 passed, 493 skipped, 385 warnings in 3124.66s (0:52:04)`<|||||>Thanks for your contribution! |
transformers | 7,857 | closed | Create README.md |
# What does this PR do?
Create model card for `cambridgeltl/BioRedditBERT-uncased`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-16-2020 18:16:04 | 10-16-2020 18:16:04 | |
transformers | 7,856 | closed | Remove duplicated mish activation function | # What does this PR do?
The mish activation function was repeated in the file. I removed the duplicate.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-16-2020 17:42:30 | 10-16-2020 17:42:30 | |
transformers | 7,855 | closed | Model card for German BERT fine-tuned for LER/NER | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-16-2020 17:18:26 | 10-16-2020 17:18:26 | |
transformers | 7,854 | closed | Fixing issue #7810 | # What does this PR do?
Fixes # 7810
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik you suggested to open this PR here: https://github.com/huggingface/transformers/issues/7810
| 10-16-2020 17:14:47 | 10-16-2020 17:14:47 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,853 | closed | SequenceSummary class in modeling_utils.py | Hello,
I have a question about the documentation strings provided for the forward function of the SequenceSummary class from modeling_utils.py:
https://github.com/huggingface/transformers/blob/dc552b9b7025ea9c38717f30ad3d69c2a972049d/src/transformers/modeling_utils.py#L1484
So when `cls_index` is not specified as an argument in the SequenceSummary() call, is the last token of the sequence used for the classification task? The sentence describing it is somewhat awkward...
Thanks, | 10-16-2020 14:51:59 | 10-16-2020 14:51:59 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,852 | closed | Upgrade PyTorch Lightning to 1.0.2 | Updates Pytorch Lightning to 1.0.2, and moves early stopping callback into the general callbacks due to deprecation. | 10-16-2020 13:09:30 | 10-16-2020 13:09:30 | @sshleifer @julien-c
mind helping land this?
this upgrades HF to use lightning 1.0+
the 1.0 version has the stable API along with major fixes (many of the open issues with lightning and HF are fixed here).
Finally, a few notes about the changes with the earlier versions (earliest versions) of lightning:
1. models are decoupled from data (but still supported for research purposes).
2. For the HF case, LightningModules are probably best used to wrap models that should be trained in a specific way, or the model can even be defined directly as a LightningModule, so that modifying and subclassing is easier.
3. all LMs now offer first class support for exporting to onnx and torchscript.
thanks for the help!<|||||>Thanks for the contribution!
I updated (pushed to) your branch to include some new multi-gpu tests that are on master from @stas00.
For hf tests marked `@slow`, circleci won't run them, so you should run:
```
RUN_SLOW=1 pytest examples/seq2seq/finetune.py
RUN_SLOW=1 pytest examples/seq2seq/test_bash_script.py # I added --do_predict to this to document another issue. This is failing on master, so feel free to remove it if you can't get it working
```
On a machine with >1 GPUs, run:
```bash
RUN_SLOW=1 pytest examples/seq2seq/test_seq2seq_examples_multi_gpu.py
```
We have had some issues getting `trainer.test` working in DDP (@nateraw thinks our usage of `hparams` is the culprit). I can't replicate the issue with the PL `BoringModel`, but you may run into it now. Would love to know if you see anything, but if it's unfixable, we can remove `--do_predict` from test_bash_script.py.
<|||||>Thanks @sshleifer will investigate :)<|||||>I've made a few changes in the code and the tests are passing!
Regarding the barrier I don't think it should stay like this, but its necessary due to the test logic after picking the best model. Current functionality means testing happens across all GPUs, in sync.
The save function in PL probably should add a barrier to ensure all processes are in sync when saving to prevent further issues, and I can open a followup issue to handle this (cc @williamFalcon)
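Something along these lines, roughly (a sketch only, not the actual Lightning source; the helper name is made up):
```python
# Rough sketch of the kind of sync point meant here (hypothetical helper, not the PL code).
import torch.distributed as dist


def save_best_checkpoint_in_sync(trainer, filepath):
    if trainer.is_global_zero:
        trainer.save_checkpoint(filepath)  # only rank 0 writes the checkpoint
    if dist.is_available() and dist.is_initialized():
        dist.barrier()  # every rank waits here, so the file exists before anyone tries to reload it
```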
@sshleifer could you verify things work from your end as well if poss?<|||||>@sshleifer @julien-c any chance we can land this? everything is passing and things are fixed!
:)<|||||>@SeanNaren is
> Running into other failing tests, this might be the pytorch 1.7 upgrade...
resolved?<|||||>cc @stas00, @patil-suraj for awareness.<|||||>@sshleifer yeah just needed to merge master<|||||>Thanks @SeanNaren for your work on this. It is very much appreciated!<|||||>I propose an easy solution for such changes - in addition to updating the requirement file, we could add a run-time version check at the top of `lightning_base.py`, which gets adjusted when a breaking change is done like in this PR.<|||||>Incidentally this upgrade solves the problem I have been battling since last night.
a test using PL w/ 1 gpu, followed by a test using 2 gpus was failing with (same pytest process):
```
pytorch_lightning.utilities.exceptions.MisconfigurationException:
You requested GPUs: [0, 1]
But your machine only has: [0]
```
after this update, it's gone! Clearly it was some bug in PL that kept some stale state.
<|||||>Thank you guys for this awesome repo :)
@stas00 that's a neat solution, we're hoping to assist out on doing some small refactors to the lightning code to make it simpler and easier whilst being less intrusive. More to follow in time and we'll keep you guys in the loop<|||||>How hard would it be to get the validation metrics sync'd/averaged between processes?
I posted an issue a while back on your repo where checkpoint saving was not making decisions based on averaged metrics, and I know you guys have made some progress there.
I'm specifically interested in `examples/seq2seq/finetune.py` which uses [this function](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/callbacks.py#L87) to build a `ModelCheckpoint` callback.
Even if it were just a simple average across processes of the relevant metric this would be very valuable to us.<|||||>> How hard would it be to get the validation metrics sync'd/averaged between processes?
The new Metrics API in Lightning 1.0 supports distributed syncing out of the box (doesn't even require lightning to work). Any metrics you need can definitely be [implemented](https://pytorch-lightning.readthedocs.io/en/latest/metrics.html#implementing-a-metric) fairly easily.
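A minimal sketch of a synced mean with that API (hypothetical class name, PL 1.0-style imports):
```python
# Hypothetical example of a distributed-synced mean, using the 1.0 Metrics API.
import torch
from pytorch_lightning.metrics import Metric


class SyncedMean(Metric):
    def __init__(self):
        super().__init__()
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("count", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, value: torch.Tensor):
        self.total += value.sum()
        self.count += value.numel()

    def compute(self):
        return self.total / self.count
```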
<|||||>Are there any documented examples where `dist_reduce_fx="mean"`?<|||||>@sshleifer Right now I can't find any of our current metrics using ``dist_reduce_fx="mean"``. Is there a specific issue you are running into?<|||||>Since I already have code to compute all the metrics I want, I am trying to make a class that just gathers the computed metrics from all processes and averages whatever it is sent. (Maybe in future I will try to overweight ranks that processed more data.)
My issue is that the type of value sent to update has to be coerced to a `torch.tensor` on the correct device, which I have not figured out yet. My attempt at
https://github.com/huggingface/transformers/pull/8269
raises `RuntimeError: Tensors must be CUDA and dense` during the `allgather` step.
(pl 1.0.4, torch 1.6)<|||||>Will take a look @sshleifer :) |
transformers | 7,851 | closed | Trainer accepts iterable datasets | Fixes #5990
Follows PR #5995 (got closed for being stale...)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik
| 10-16-2020 13:03:56 | 10-16-2020 13:03:56 | Hi there, thanks for the PR! Unfortunately it's not based on current master, so we can't review properly as it suggests changes from code that does not exist anymore.<|||||>Some merging work to be done. |
transformers | 7,850 | closed | OperatorNotAllowedInGraphError in dbmdz/bert-base-italian-cased for Token Classification | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-5.4.0-1028-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @stefan-it
## Information
Model I am using (Bert, XLNet ...):
Bert (dbmdz/bert-base-italian-cased)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am following the tutorial in https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities in order to fine-tune a custom Named Entity Recognition NN.
1. Load the tokenizer and tokenize the data
```
tokenizer = BertTokenizerFast.from_pretrained('dbmdz/bert-base-italian-cased')
```
following the steps from [tutorial](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities) to obtain `train_dataset` and `val_dataset` .
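(For completeness, I build the datasets roughly like this, following the tutorial; paraphrased from memory, so variable names are assumptions:)
```
import tensorflow as tf

train_dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels))
val_dataset = tf.data.Dataset.from_tensor_slices((dict(val_encodings), val_labels))
```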
2. Load the Token Classifier with pretrained weights from the model
```
from transformers import TFBertForTokenClassification
model = TFBertForTokenClassification.from_pretrained('dbmdz/bert-base-italian-cased', num_labels=len(unique_tags))
```
As a side note, this gives the following warning which I understand is to be expected since the loaded model was not fine tuned for Token Classification:
```
Some weights of the model checkpoint at dbmdz/bert-base-italian-cased were not used when initializing TFBertForTokenClassification: ['mlm___cls', 'nsp___cls']
- This IS expected if you are initializing TFBertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of TFBertForTokenClassification were not initialized from the model checkpoint at dbmdz/bert-base-italian-cased and are newly initialized: ['dropout_37', 'classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
3. Train
```
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss)
model.fit(train_dataset.shuffle(100).batch(16), epochs=3, batch_size=16)
```
This raises the following error:
```
Epoch 1/3
---------------------------------------------------------------------------
OperatorNotAllowedInGraphError Traceback (most recent call last)
<ipython-input-23-5eb528629965> in <module>
1 optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
2 model.compile(optimizer=optimizer, loss=model.compute_loss)
----> 3 model.fit(train_dataset.shuffle(100).batch(16), epochs=3, batch_size=16)
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
851 batch_size=batch_size):
852 callbacks.on_train_batch_begin(step)
--> 853 tmp_logs = train_function(iterator)
854 # Catch OutOfRangeError for Datasets of unknown size.
855 # This blocks until the batch has finished executing.
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
625 # This is the first call of __call__, so we have to initialize.
626 initializers = []
--> 627 self._initialize(args, kwds, add_initializers_to=initializers)
628 finally:
629 # At this point we know that the initialization is complete (or less
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
504 self._concrete_stateful_fn = (
505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 506 *args, **kwds))
507
508 def invalid_creator_scope(*unused_args, **unused_kwds):
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2444 args, kwargs = None, None
2445 with self._lock:
-> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2447 return graph_function
2448
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2775
2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
2778 self._function_cache.primary[cache_key] = graph_function
2779 return graph_function, args, kwargs
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2665 arg_names=arg_names,
2666 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667 capture_by_value=self._capture_by_value),
2668 self._function_attributes,
2669 # Tell the ConcreteFunction to clean up its graph once it goes out of
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
979 _, original_func = tf_decorator.unwrap(python_func)
980
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
982
983 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
439 # __wrapped__ allows AutoGraph to swap in a converted function. We give
440 # the function a weak reference to itself to avoid a reference cycle.
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds)
442 weak_wrapped_fn = weakref.ref(wrapped_fn)
443
~/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
OperatorNotAllowedInGraphError: in user code:
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:533 train_step **
y, y_pred, sample_weight, regularization_losses=self.losses)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:143 __call__
losses = self.call(y_true, y_pred)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:246 call
return self.fn(y_true, y_pred, **self._fn_kwargs)
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:178 compute_loss
if tf.math.reduce_any(labels == -1):
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:778 __bool__
self._disallow_bool_casting()
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:545 _disallow_bool_casting
"using a `tf.Tensor` as a Python `bool`")
/home/ubuntu/anaconda3/envs/doc_backoffice/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:532 _disallow_when_autograph_enabled
" decorating it directly with @tf.function.".format(task))
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect the model to train without errors, thank you for your kind help!
| 10-16-2020 12:19:08 | 10-16-2020 12:19:08 | This might be of interest to @jplu <|||||>Hello!
For now you cannot use the `.fit()` method to train a TF model. This will be possible in a next release.
If you want to train a NER model please use the TF trainer.<|||||>Thanks for your answer, I will try TF Trainer<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,849 | closed | how to save and load fine-tuned model? | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
```python
class MyModel(nn.Module):
def __init__(self, num_classes):
super(MyModel, self).__init__()
self.bert = BertModel.from_pretrained('hfl/chinese-roberta-wwm-ext', return_dict=True).to(device)
self.fc = nn.Linear(768, num_classes, bias=False)
def forward(self, x_input_ids, x_type_ids, attn_mask):
outputs = self.bert(x_input_ids, token_type_ids=x_type_ids, attention_mask=attn_mask)
pred = self.fc(outputs.pooler_output)
return pred
model = MyModel(num_classes).to(device)
# save
# load
```
I have defined my model via huggingface, but I don't know how to save and load the model, hopefully someone can help me out, thanks! | 10-16-2020 10:34:16 | 10-16-2020 10:34:16 | To save your model, first create a directory in which everything will be saved. In Python, you can do this as follows:
```
import os
os.makedirs("path/to/awesome-name-you-picked")
```
Next, you can use the `model.save_pretrained("path/to/awesome-name-you-picked")` method. This will save the model, with its weights and configuration, to the directory you specify. Then you can load it back using `model = <ModelClass>.from_pretrained("path/to/awesome-name-you-picked")`, where `<ModelClass>` is the same model class you started from.
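A minimal sketch of that round trip with a concrete class (`BertModel` is just an example; use whichever model class you loaded originally):
```python
import os
from transformers import BertModel, BertTokenizer

save_directory = "path/to/awesome-name-you-picked"
os.makedirs(save_directory, exist_ok=True)

model = BertModel.from_pretrained("bert-base-uncased")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

model.save_pretrained(save_directory)      # writes config.json and the weights
tokenizer.save_pretrained(save_directory)  # keeps the matching tokenizer files next to them

model = BertModel.from_pretrained(save_directory)
tokenizer = BertTokenizer.from_pretrained(save_directory)
```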
Source: https://huggingface.co/transformers/model_sharing.html<|||||>> To save your model, first create a directory in which everything will be saved. In Python, you can do this as follows:
>
> ```
> import os
> os.makedirs("path/to/awesome-name-you-picked")
> ```
>
> Next, you can use the `model.save_pretrained("path/to/awesome-name-you-picked")` method. This will save the model, with its weights and configuration, to the directory you specify. Next, you can load it back using `model = .from_pretrained("path/to/awesome-name-you-picked")`.
>
> Source: https://huggingface.co/transformers/model_sharing.html
Should I save the model parameters separately, save the BERT first and then save my own nn.linear. Is this the only way to do the above? Is there an easy way? Thank you for your reply<|||||>I validate the model as I train it, and save the model with the highest scores on the validation set using `torch.save(model.state_dict(), output_model_file)`. As shown in the figure below

Then I trained again and loaded the previously saved model instead of training from scratch, but it didn't work well, which made me feel like it wasn't saved or loaded successfully?
<|||||>Hi, I'm also confused about this. Have you solved this problem? If yes, could you please show me your code for saving and loading the model in detail. Thanks! :) <|||||>> Hi, I'm also confused about this. Have you solved this problem? If yes, could you please show me your code for saving and loading the model in detail. Thanks! :)
Are you Chinese? If you are, I could reply to you in Chinese.<|||||>> > Hi, I'm also confused about this. Have you solved this problem? If yes, could you please show me your code for saving and loading the model in detail. Thanks! :)
>
> Are you Chinese? If you are, I could reply to you in Chinese.
Haha, I'd like to ask you: how should I save the model?<|||||>I asked a friend from Taiwan, and he told me that Hugging Face's pretrained models are also written in PyTorch, so you can just save and load the model the normal PyTorch way:
```python
model = MyModel(num_classes).to(device)
optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)
output_model = './models/model_xlnet_mid.pth'
# save
def save(model, optimizer):
# save
torch.save({
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, output_model)
save(model, optimizer)
# load
checkpoint = torch.load(output_model, map_location='cpu')
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
```<|||||>> I asked a friend from Taiwan, and he told me that Hugging Face's pretrained models are also written in PyTorch, so you can just save and load the model the normal PyTorch way:
>
> ```python
> model = MyModel(num_classes).to(device)
> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)
> output_model = './models/model_xlnet_mid.pth'
>
> # save
> def save(model, optimizer):
> # save
> torch.save({
> 'model_state_dict': model.state_dict(),
> 'optimizer_state_dict': optimizer.state_dict()
> }, output_model)
>
> save(model, optimizer)
>
> # load
> checkpoint = torch.load(output_model, map_location='cpu')
> model.load_state_dict(checkpoint['model_state_dict'])
> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
> ```
Oh, okay, thank you!<|||||>> I asked a friend from Taiwan, and he told me that Hugging Face's pretrained models are also written in PyTorch, so you can just save and load the model the normal PyTorch way:
>
> ```python
> model = MyModel(num_classes).to(device)
> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)
> output_model = './models/model_xlnet_mid.pth'
>
> # save
> def save(model, optimizer):
> # save
> torch.save({
> 'model_state_dict': model.state_dict(),
> 'optimizer_state_dict': optimizer.state_dict()
> }, output_model)
>
> save(model, optimizer)
>
> # load
> checkpoint = torch.load(output_model, map_location='cpu')
> model.load_state_dict(checkpoint['model_state_dict'])
> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
> ```
Bookmarking this.<|||||>Hi all,
I have saved a keras fine tuned model on my machine, but I would like to use it in an app to deploy.
I uploaded the model to GitHub, and I wondered if I could load it from the directory it sits in on GitHub.
That does not seem to be possible. Does anyone know where I could host this model so that anyone can use it?
Hugging Face provides a hub which is very useful for that, but this is not a Hugging Face model.
Let me know if you can help please :) <|||||>I know the `huggingface_hub` library provides a utility class called `ModelHubMixin` to save and load any PyTorch model from the hub (see original [tweet](https://twitter.com/julien_c/status/1372613568244371461?s=19)). I wonder whether something similar exists for Keras models?
cc @julien-c <|||||>That would be ideal. But I wonder; if there are no public hubs I can host this keras model on, does this mean that no trained keras models can be publicly deployed on an app?<|||||>^Tagging @osanseviero and @nateraw on this!<|||||>Having an easy way to save and load Keras models is in our short-term roadmap and we expect to have updates soon!
> if there are no public hubs I can host this keras model on, does this mean that no trained keras models can be publicly deployed on an app?
I'm not sure I fully understand your question. Using Hugging Face Inference API, you can make inference with Keras models and easily share the models with the rest of the community. Note that you can also share the model using the Hub and use other hosting alternatives or even run your model on-device.<|||||>Thanks @osanseviero for your reply!
What I'm wondering is whether I can have my Keras model loaded on the Hugging Face Hub (or another one) like I have for my BertForSequenceClassification fine-tuned model (see the screenshot).
This allows me to deploy the model publicly, since anyone can load it from any machine. I would like to do the same with my Keras model. Does that make sense? If yes, do you know how? That would be awesome, since my model performs great! It's for a summariser :)

<|||||>Hello, is it possible to save the whole model instead of just its parameters?<|||||>> ```python
> model = MyModel(num_classes).to(device)
> optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=1e-2)
> output_model = './models/model_xlnet_mid.pth'
>
> # save
> def save(model, optimizer):
> # save
> torch.save({
> 'model_state_dict': model.state_dict(),
> 'optimizer_state_dict': optimizer.state_dict()
> }, output_model)
>
> save(model, optimizer)
>
> # load
> checkpoint = torch.load(output_model, map_location='cpu')
> model.load_state_dict(checkpoint['model_state_dict'])
> optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
> ```
Hey, what is the output_model parameter? What should its value be? |
transformers | 7,848 | closed | RuntimeError with DistributedDataParallel | ## Environment info
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.7.7.final.0
- PyTorch version (GPU?): 1.6.0 gpu
- Tensorflow version (GPU?): X
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed set-up
- Conda : 4.8.3
- cuda =11.0
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
The problem arises when using:
* [x] my own modified scripts : Triplet Loss applied to sentence similarity, to practice writing efficient distributed learning code.
The tasks I am working on is:
* [x] my own task or dataset: Triplet Loss applied to sentence similarity, to practice writing efficient distributed learning code.
## To reproduce
I have not been able to generate a minimal version yet.
Stack trace
```
Traceback (most recent call last):
[...]
File "my_program.py", line 246, in epoch
loss.backward()
File "conda/lib/python3.7/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 200]] is at version 9; expected version 7 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
My model :
```python
def __init__(self, trainfile, devfile, outdir, rank=0):
# default config
self.max_sent_len = 200
self.batchsize = 10
self.lr = 2e-5
self.accumulated_batches = 6
self.max_epochs = 10
self.seed = 42
self.embs = "bert-base-multilingual-cased"
self.tokenizer, self.model = self.transformers(self.embs)
self.tripletloss = lossless_triplet_loss.LosslessTripletLoss()
print("use %s gpus" % torch.cuda.device_count())
self.model = self.model.to(rank)
self.model = torch.nn.parallel.DistributedDataParallel(self.model, device_ids=[rank], output_device=rank, find_unused_parameters=True)
self.rank = rank
half = False
self.model = self.model.to(self.rank)
self.optimizer = optim.Adam(filter(lambda param: param.requires_grad, self.model.parameters()), lr=self.lr)
self.traindataset = data.TripleDataset2(trainfile, self.tokenizer, self.max_sent_len)
self.devdataset = data.TripleDataset2(devfile, self.tokenizer, self.max_sent_len)
self.traindataloader = DataLoader(self.traindataset,
collate_fn=self.collate_fn,
batch_size = self.batchsize,
num_workers = 0)
self.devdataloader = DataLoader(self.devdataset,
collate_fn=self.collate_fn,
batch_size = self.batchsize,
num_workers = 0)
def train(self, epoch):
self.model.train()
iterations = int(math.ceil(len(self.traindataset)/self.batchsize))
iterator = tqdm.tqdm(enumerate(self.traindataloader),
ncols=100,
ascii=True,
desc="epoch %d" % (epoch),
mininterval=1.0,
total = iterations)
lastloss = None
for batch_idx, anchor_pos_neg in iterator:
(anchor, amask), (pos, pmask), (neg, nmask) = anchor_pos_neg
anchor = anchor.to(self.rank)
pos = pos.to(self.rank)
neg = neg.to(self.rank)
amask = amask.to(self.rank)
pmask = pmask.to(self.rank)
nmask = nmask.to(self.rank)
anchor_vec = self.tokens2vec(anchor, attention_mask=amask)
pos_vec = self.tokens2vec(pos, attention_mask=pmask)
neg_vec = self.tokens2vec(neg, attention_mask=nmask)
loss, distp, distn = self.calc_triplet_loss(anchor_vec, pos_vec, neg_vec)
loss.backward()
if (batch_idx+1) % self.accumulated_batches == 0:
self.optimizer.step()
self.optimizer.zero_grad()
def tokens2vec(self, tokens_tensor, attention_mask=None):
hidden_states = self.model(tokens_tensor, attention_mask=attention_mask)[0]
token_vecs = torch.squeeze(hidden_states, dim=0)
a = torch.sum(torch.mul(token_vecs, torch.unsqueeze(attention_mask, 2)), dim=1)
m = torch.unsqueeze(torch.sum(attention_mask, dim=1), 1)
sentence_embedding = torch.sigmoid(torch.div(a, m))
return sentence_embedding
def transformers(self, name):
if name.startswith("bert"):
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained(name, do_lower_case=False)
model = BertModel.from_pretrained(name)
else:
tokenizer = None
model = None
print("ERROR: invalid embeddings name <%s>" % name, file=sys.stderr)
return tokenizer, model
```
The error happens with batch_idx = 0.
200 is my maximum sequence length. I only use Longtensor as input to BertModel (for anchor, pos and neg).
What I find strange is that I am sending samples in batches of size 10, so I am wondering if my problem is really with the embedding layer, or rather caused by my custom collate_fn :
```python
def collate_fn(self, batch):
anchor = []
pos = []
neg = []
anchor_mask = []
pos_mask = []
neg_mask = []
for a,p,n in batch:
anchor.append(a[0])
pos.append(p[0])
neg.append(n[0])
anchor_mask.append(a[1])
pos_mask.append(p[1])
neg_mask.append(n[1])
return (torch.stack(anchor, dim=0),torch.stack(anchor_mask, dim=0)), \
(torch.stack(pos, dim=0),torch.stack(pos_mask, dim=0)), \
(torch.stack(neg, dim=0),torch.stack(neg_mask, dim=0))
```
My model is working well with only one GPU, or when using DataParallel. My issue arises only with DistributedDataParallel.
## Expected behavior
A running training. | 10-16-2020 08:43:48 | 10-16-2020 08:43:48 | The `BertModel` runs absolutely fine in `DistributedDataParallel` (just retried) so the issue probably comes from the rest of your code and not the library. I can't see the code for the `LosslessTripletLoss`, maybe the issues comes from there.
In any case, to debug I would follow the instruction given in the error message:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 200]] is at version 9; expected version 7 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```<|||||>Well, thanks for your tips.
But when using set_detect_anomaly, my problem seems to be located in EmbeddingBackward :
```
[W python_anomaly_mode.cpp:60] Warning: Error detected in EmbeddingBackward. Traceback of forward call that caused the error:
File "<string>", line 1, in <module>
File "/lib/python3.7/multiprocessing/spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "/lib/python3.7/multiprocessing/spawn.py", line 118, in _main
return self._bootstrap()
File "/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "my_program.py", line 525, in demo_basic
f4n.train()
File "my_program.py", line 213, in train
self.epoch(epoch)
File "my_program.py", line 249, in epoch
pos_vec = self.tokens2vec(pos, attention_mask=pmask)
File "my_program.py", line 444, in tokens2vec
hidden_states = self.model(tokens_tensor, attention_mask=attention_mask)[0]
File "/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/lib/python3.7/site-packages/transformers/modeling_bert.py", line 831, in forward
input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
File "/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/lib/python3.7/site-packages/transformers/modeling_bert.py", line 198, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
```
The problem occurs when calling BertModel the second time in the epoch, when calculating "pos".<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi, have you solved this problem? I also met this issue.<|||||>No, sorry, I finally gave up.<|||||>Same Error. Any one has solved the problem?<|||||>I've solved this problem. Mine was because when I build the input sequences as follows:
`decoder_inputs = labels.narrow(1, 0, seqlen - 1).clone()`
`decoder_masks = label_masks.narrow(1, 0, seqlen - 1).clone()`
`decoder_labels = labels.narrow(1, 1, seqlen-1).clone()`
`decoder_label_masks = label_masks.narrow(1, 1, seqlen - 1).clone()`
`decoder_labels[~decoder_label_masks.bool()] = -100`
I forgot to add `.clone()` at the beginning. So when I change some of the `decoder_labels` to be -100, some of the elements in `decoder_inputs` were also changed, which caused the error.
Hope this information could help you a bit!<|||||>> I've solved this problem. Mine was because when I build the input sequences as follows:
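As a small, generic PyTorch illustration of why the missing `.clone()` matters (nothing here is specific to the code above): `narrow()` returns a view that shares storage with the original tensor, so writing into it also changes the original.
```python
import torch

labels = torch.arange(6).reshape(1, 6)
view = labels.narrow(1, 1, 5)           # shares memory with `labels`
copy = labels.narrow(1, 1, 5).clone()   # independent copy

view[0, 0] = -100
print(labels)  # the original tensor was modified through the view
print(copy)    # the clone is unchanged
```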
> `decoder_inputs = labels.narrow(1, 0, seqlen - 1).clone()`
> `decoder_masks = label_masks.narrow(1, 0, seqlen - 1).clone()`
> `decoder_labels = labels.narrow(1, 1, seqlen-1).clone()`
> `decoder_label_masks = label_masks.narrow(1, 1, seqlen - 1).clone()`
> `decoder_labels[~decoder_label_masks.bool()] = -100`
>
> I forgot to add `.clone()` at the beginning. So when I change some of the `decoder_labels` to be -100, some of the elements in `decoder_inputs` were also changed, which caused the error.
>
> Hope this information could help you a bit!
Thanks a lot!<|||||>I ran into the exact same issue with triplet loss, using transformers 4.5.1 in an NCCL distributed training setting.
I found that this problem can occur when several BERT forward passes are run and the loss is computed over all of them at once.
I solved it by concatenating the inputs up front and chunking the output to calculate the loss:
```python
input_ids = torch.cat([anchor_input_ids, pos_input_ids, neg_input_ids], dim=0)
attention_mask = torch.cat([anchor_attention_mask, pos_attention_mask, neg_attention_mask], dim=0)
token_type_ids = torch.cat([anchor_token_type_ids, pos_token_type_ids, neg_token_type_ids], dim=0)
emb = model(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
anchor_emb, pos_emb, neg_emb = emb.pooled_output.chunk(3, dim=0)
loss = criterion(anchor_emb, pos_emb, neg_emb)  # anchor, positive, negative
```
<|||||>My problem was solved after adding `broadcast_buffers=False` to `torch.nn.parallel.DistributedDataParallel`
Following https://github.com/ashkamath/mdetr/issues/16#issuecomment-878388469 |
transformers | 7,847 | closed | Where do the models go in colab? | ## Who can help
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [.] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [.] my own task or dataset: (give details below)
Semeval 2017 task 10 keyword boundary detection
## To reproduce
Steps to reproduce the behavior:
1. Run `train.py` from [this repo](https://github.com/pranav-ust/BERT-keyphrase-extraction)
2. Load the model as `model = BertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=len(params.tag2idx))`
3. Run `eval.py` as `python evaluate.py --data_dir data/task1/ --bert_model_dir model/ --model_dir experiments/base_model --restore_file best`
This gives the following error:
`Traceback (most recent call last):
File "evaluate.py", line 117, in <module>
config = BertConfig.from_json_file(config_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 436, in from_json_file
config_dict = cls._dict_from_json_file(json_file)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 441, in _dict_from_json_file
with open(json_file, "r", encoding="utf-8") as reader:
FileNotFoundError: [Errno 2] No such file or directory: 'model/bert_config.json'
2020-10-16 07:41:50.582535: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Loading the dataset...
done.
Traceback (most recent call last):
File "evaluate.py", line 117, in <module>
config = BertConfig.from_json_file(config_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 436, in from_json_file
config_dict = cls._dict_from_json_file(json_file)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 441, in _dict_from_json_file
with open(json_file, "r", encoding="utf-8") as reader:
FileNotFoundError: [Errno 2] No such file or directory: 'model/bert_config.json'`
Which I believe is caused by me passing `model/` as `--bert-model-dir` argument.
Where do the downloaded models go on colab? Or more importantly, can I pass a path argument to download them in a specific folder?
### Edit
If I download a model as `!wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz` and extract it in the model/ folder, then `python eval.py` gives the following error:
`Loading the dataset...
- done.
Traceback (most recent call last):
File "evaluate.py", line 118, in <module>
model = BertForTokenClassification(config, num_labels=len(params.tag2idx))
TypeError: __init__() got an unexpected keyword argument 'num_labels'` | 10-16-2020 07:51:05 | 10-16-2020 07:51:05 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,846 | closed | BartTokenizer prepare_seq2seq_batch() does not return decoder_input_ids, decoder_attention_mask as per document after passing tgt_texts | I am trying to train a seq2seq model using BartModel. As per BartTokenizer documentation if I pass tgt_texts then it should return decoder_attention_mask and decoder_input_ids please check the attachment for clarity.

But I am only getting input_ids, attention_mask and labels.

| 10-16-2020 07:00:21 | 10-16-2020 07:00:21 | I am facing the same issue and I noticed that the method indeed returns the `["input_ids"]` of `tgt_texts` as labels. I think I could easily fix this to return both `input_ids` and `attention_mask` of `tgt_texts` (as `decoder_...`) but I noticed the same pattern in other seq2seq models, like T5. I am not sure what's the proper solution but if it is similar to what I suggest, than I'd be happy to make a pull request.
@LysandreJik I'd be happy to hear an opinion and start working on this.<|||||>I think https://github.com/huggingface/transformers/pull/6654/ and https://github.com/huggingface/transformers/issues/6624 are related - the PR changed `decoder_input_ids` to `labels`. Probably the documentation should be changed but I have to get more familiar with the respective issue and PR to be sure.<|||||>Thanks for the feedback @freespirit. Hopefully, they will update the documentation as it is a little bit confusing. But what I found that the modeling_bart.py file already handles the problem. _prepare_bart_decoder_inputs() and shift_tokens_right() solving that if I am not wrong. But I think I have to go deeper for understanding which I am trying to.


<|||||>Pinging @sshleifer for advice<|||||>@MojammelHossain is correct, the docs are wrong.
The correct usage is to allow `_prepare_bart_decoder_inputs` to make `decoder_input_ids` and `decoder_attention_mask` for you. For training, you only need to pass the 3 keys returned by `prepare_seq2seq_batch`.
|
transformers | 7,845 | closed | [seq2seq testing] improve readability | improve readability
@sshleifer | 10-16-2020 05:23:53 | 10-16-2020 05:23:53 | |
transformers | 7,844 | closed | [seq2seq distributed] child process stuck on error |
Filing an issue for myself to help move along https://github.com/huggingface/transformers/pull/7281
```
pytest -sv examples/seq2seq/test_seq2seq_examples_multi_gpu.py
```
gets stuck if one of the child subprocesses throws an error.
Remove `skip_output_dir_check=True,` to reproduce the problem.
It could be related to realtime async read/write pipes getting into a deadlock - see the XXX warning in that test, need to first try whether the problem goes away if I switch to non-real time pipes.
I will get back to it later.
| 10-16-2020 05:01:01 | 10-16-2020 05:01:01 | OK, this is the problem with PL if anything, as the command:
```
PYTHONPATH="src" /home/stas/anaconda3/envs/main-38/bin/python /mnt/nvme1/code/huggingface/transformers-distr-train-test/examples/seq2seq/distillation.py --supervise_forward --normalize_hidden --label_smoothing=0.0 --eval_beams=1 --val_metric=loss --save_top_k=1 --adafactor --early_stopping_patience=-1 --logger_name=default --length_penalty=0.5 --cache_dir= --task=summarization --num_workers=2 --alpha_hid=0 --freeze_embeds --sortish_sampler --student_decoder_layers=1 --val_check_interval=0.5 --output_dir=/tmp/tmpqajqhzwo --no_teacher --fp16_opt_level=O1 --gpus=2 --max_grad_norm=1.0 --do_train --do_predict --accumulate_grad_batches=1 --seed=42 --model_name_or_path=sshleifer/tinier_bart --config_name= --tokenizer_name=facebook/bart-large --learning_rate=0.3 --lr_scheduler=linear --weight_decay=0.0 --adam_epsilon=1e-08 --warmup_steps=0 --max_epochs=2 --train_batch_size=1 --eval_batch_size=2 --max_source_length=12 --max_target_length=12 --val_max_target_length=12 --test_max_target_length=12 --n_train=-1 --n_val=-1 --n_test=-1 --student_encoder_layers=1 --freeze_encoder --data_dir=/tmp/tmpo_9t6k7e --alpha_mlm=0.2 --alpha_ce=0.8 --teacher=sshleifer/bart-tiny-random
```
hangs on its own, so this is definitely not an issue of the newly added distributed support for `pytest`.
I think that when one of the workers fails (e.g. asserts on output directory already exists) and the coordinating process gets stuck waiting for the worker to send something back. |
transformers | 7,843 | closed | [seq2seq] get_git_info fails gracefully | It looks like gitpython is broken when one tries to use it from `git checkout hash`, see: https://github.com/gitpython-developers/GitPython/issues/633
While debugging/bisecting we need to be able to step through different commits and for the code to still work.
Currently if I do:
```
git checkout fb94b8f1e16eb
```
and run an examples test, like `examples/seq2seq/test_seq2seq_examples_multi_gpu.py` (soon to be merged) it fails with:
```
ERR: File "./examples/seq2seq/distillation.py", line 504, in <module>
ERR: distill_main(args)
ERR: File "./examples/seq2seq/distillation.py", line 494, in distill_main
ERR: model = create_module(args)
ERR: File "./examples/seq2seq/distillation.py", line 411, in create_module
ERR: model = module_cls(args)
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/finetune.py", line 58, in __init__
ERR: save_git_info(self.hparams.output_dir)
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/utils.py", line 355, in save_git_info
ERR: repo_infos = get_git_info()
ERR: File "/mnt/nvme1/code/huggingface/transformers-multigpu/examples/seq2seq/utils.py", line 374, in get_git_info
ERR: "repo_branch": str(repo.active_branch),
ERR: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/git/repo/base.py", line 705, in active_branch
ERR: return self.head.reference
ERR: File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/git/refs/symbolic.py", line 272, in _get_reference
ERR: raise TypeError("%s is a detached symbolic reference as it points to %r" % (self, sha))
ERR: TypeError: HEAD is a detached symbolic reference as it points to 'fb94b8f1e16eb21d166174f52e3e49e669ef0ac4'
```
The odd leading `ERR` string is just a replay of stderr from the sub-process - please ignore this nuance.
This PR provides a workaround.
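For context, the kind of graceful fallback this is about looks roughly like the sketch below (illustrative only; the merged change may differ in its details). gitpython raises a `TypeError` for `active_branch` on a detached HEAD, so the helper catches it and still returns the same keys.
```python
import git

def get_git_info():
    repo = git.Repo(search_parent_directories=True)
    try:
        branch = str(repo.active_branch)
    except TypeError:  # detached HEAD, e.g. after `git checkout <sha>`
        branch = "detached"
    return {
        "repo_id": str(repo),
        "repo_sha": str(repo.head.object.hexsha),
        "repo_branch": branch,
    }
```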
@sshleifer
| 10-16-2020 04:13:52 | 10-16-2020 04:13:52 | Thanks for returning the same keys! |
transformers | 7,842 | closed | [testing] disable FutureWarning in examples tests | same as tests/conftest.py, we can't resolve those warnings, so turn the noise off.
same as PR: https://github.com/huggingface/transformers/pull/7079 | 10-16-2020 03:24:46 | 10-16-2020 03:24:46 | |
transformers | 7,841 | closed | [DOC] Typo and fix the input of labels to `cross_entropy` | # What does this PR do?
The current version caused some errors. The proposed changes fixed it for me. Hope this is helpful!
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
@sgugger | 10-15-2020 23:09:02 | 10-15-2020 23:09:02 | |
transformers | 7,840 | closed | Token Type IDs returned from the tokenizer for T5 don't work with special tokens | With `transformers-3.3.1`:
```
import transformers
t = transformers.AutoTokenizer.from_pretrained('t5-small')
t.encode_plus(["a"], ["b"], add_special_tokens=True, return_token_type_ids=True)
```
This results in
```
{'input_ids': [9, 1, 115, 1], 'token_type_ids': [0, 1], 'attention_mask': [1, 1, 1, 1]}
```
As you can see, the token type IDs don't align with the other outputs. | 10-15-2020 23:08:51 | 10-15-2020 23:08:51 | The PR linked to this issue should fix it. However note that T5 does not use `token_type_ids`, so that we will simply return all 0. |
transformers | 7,839 | closed | Small fixes to HP search | # What does this PR do?
HP search has two small bugs in `Trainer` right now:
- it pops things from the metrics dict, which are then improperly logged
- it doesn't pop out the `total_flos`, so the hp search tries to maximize the flos (which I guess is @TevenLeScao master plan to conquer the universe ;-) ) | 10-15-2020 21:29:05 | 10-15-2020 21:29:05 | |
transformers | 7,838 | closed | State of ONNX | Hi, love the work that's going on with ONNX. I wanted to share the current state of ONNX support in case others were wondering about it (sorry for another table).
Pipeline | Supported
--- | ---
feature-extraction | ✓
sentiment-analysis | ✓
ner | ✓
question-answering | ✓
fill-mask | ✓
text-generation | ✓
translation | Broken https://github.com/huggingface/transformers/issues/5948#issuecomment-701699251
summarization |
zero-shot-classification |
conversational |
text2text-generation |
I was able to export models for both summarization and zero-shot-classification, but they both error without a specific token length due to a reshape inside the ONNX model (code in https://github.com/huggingface/transformers/issues/7404#issuecomment-703966076). If you have any ideas for how to prevent this, I'm happy to try and put together a PR.
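One generic mitigation worth trying (a sketch only; it may or may not be enough for the reshape issue described above) is to declare batch and sequence length as dynamic axes at export time:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("a short example", return_tensors="pt")
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=11,
)
```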
---
A note for Mac Catalina users: exporting models may error with:
```text
[libprotobuf ERROR google/protobuf/descriptor_database.cc:394] Invalid file descriptor data passed to EncodedDescriptorDatabase::Add().
[libprotobuf FATAL google/protobuf/descriptor.cc:1356] CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
libc++abi.dylib: terminating with uncaught exception of type google::protobuf::FatalException: CHECK failed: GeneratedDatabase()->Add(encoded_file_descriptor, size):
```
Use an older version of `protobuf` to avoid this (https://github.com/onnx/onnx/issues/2940#issuecomment-669979419):
```sh
pip3 uninstall protobuf
pip3 install protobuf==3.11.3
``` | 10-15-2020 20:37:44 | 10-15-2020 20:37:44 | Hey, I tried exporting T5 for summarization but get the below error:
**You have to specify either decoder_input_ids or decoder_inputs_embeds**
I get a similar error for translation pipeline as well. Any workarounds available for this?
@patrickvonplaten @sshleifer <|||||>Hey @amanpreet692 - you need to provide both `input_ids` and `decoder_input_ids` for `EncoderDecoderModels`. <|||||>Hey @patrickvonplaten yep I get that, but the code implementation is such that we don't pass in the sample inputs for ONNX, the sample tokens are passed directly from within Pytorch onnx.export code I think which are consumed by the encoder and decoder inputs are empty.
I used https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb from @mfuntowicz as a reference with the additional parameter of 'translation' as pipeline.
Please let me know if there is an immediate solution else I am gonna look into this next week :)
Thanks!<|||||>Hey @amanpreet692 - sorry this is not really my area of expertise here... @mfuntowicz - could you take a look? <|||||>Hey @amanpreet692 are you able to resolve this error while exporting t5 **You have to specify either decoder_input_ids or decoder_inputs_embeds**?<|||||>Was this ever resolved @amanpreet692 @dharm033075 @mfuntowicz? I am having the same issue trying to export t5.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 7,837 | closed | [testing] fix/hide warnings | in `test_trainer_callback.py`:
- fixes PytestCollectionWarning - pytest doesn't like non-unittest classes starting with `Test`
- hides scatter_gather warnings - as it was suggested they are just so
@sgugger
Fixes: https://github.com/huggingface/transformers/issues/7832
| 10-15-2020 19:54:09 | 10-15-2020 19:54:09 | |
transformers | 7,836 | closed | model card for arabic-ner model | README file for the Arabic NER model
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 10-15-2020 19:34:50 | 10-15-2020 19:34:50 | Thanks for sharing! |
transformers | 7,835 | closed | [cleanup] assign todos, faster bart-cnn test | I went through each TODO(SS), and either did it, or made a self-assigned github issue with it.
I also deleted some commented out code blocks. | 10-15-2020 19:29:15 | 10-15-2020 19:29:15 | |
transformers | 7,834 | closed | [utils/check_copies.py] fix DeprecationWarning | in `tests/test_utils_check_copies.py` I was getting intermittently:
```
utils/check_copies.py:52
/mnt/nvme1/code/transformers-comet/utils/check_copies.py:52: DeprecationWarning: invalid escape sequence \s
while line_index < len(lines) and re.search(f"^{indent}(class|def)\s+{name}", lines[line_index]) is None:
```
So this should fix it. Not sure why it wasn't showing up all the time.
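For reference, the usual fix for that warning is simply to make the pattern a raw string, e.g.:
```python
import re

indent, name = "", "MyClass"
# rf-string: `\s` reaches `re` intact instead of being an invalid Python escape.
pattern = rf"^{indent}(class|def)\s+{name}"
print(re.search(pattern, "class MyClass:") is not None)
```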
@sgugger | 10-15-2020 19:19:39 | 10-15-2020 19:19:39 | Thanks! |
transformers | 7,833 | closed | [s2s trainer] tests fail on multi-gpu machine | #### Command
```bash
RUN_SLOW=1 USE_CUDA=1 pytest examples/seq2seq/test_finetune_trainer.py
```
#### Traceback
```python
=========================================================== test session starts ===========================================================
platform linux -- Python 3.7.4, pytest-5.3.5, py-1.8.1, pluggy-0.13.1
rootdir: /home/shleifer/transformers_fork, inifile: pytest.ini
plugins: forked-1.1.3, hydra-core-1.0.0, xdist-1.31.0, requests-mock-1.8.0
collected 2 items
examples/seq2seq/test_finetune_trainer.py /home/shleifer/transformers_fork/src/transformers/training_args.py:339: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
F/home/shleifer/transformers_fork/src/transformers/training_args.py:339: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
F
================================================================ FAILURES =================================================================
__________________________________________________________ test_finetune_trainer __________________________________________________________
def test_finetune_trainer():
> output_dir = run_trainer(1, "12", MBART_TINY, 1)
examples/seq2seq/test_finetune_trainer.py:19:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:105: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:294: in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
src/transformers/trainer.py:583: in train
train_dataloader = self.get_train_dataloader()
src/transformers/trainer.py:386: in get_train_dataloader
train_sampler = self._get_train_sampler()
examples/seq2seq/seq2seq_trainer.py:108: in _get_train_sampler
self.args.per_device_train_batch_size, distributed=self.args.n_gpu > 1
examples/seq2seq/utils.py:156: in make_sortish_sampler
return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
examples/seq2seq/utils.py:368: in __init__
num_replicas = dist.get_world_size()
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:582: in get_world_size
return _get_group_size(group)
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:196: in _get_group_size
_check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _check_default_pg():
"""
Helper that checks if the default ProcessGroup has been initialized, with
assertion
"""
assert _default_pg is not None, \
> "Default process group is not initialized"
E AssertionError: Default process group is not initialized
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:187: AssertionError
_______________________________________________________ test_finetune_trainer_slow ________________________________________________________
@slow
def test_finetune_trainer_slow():
# TODO(SS): This will fail on devices with more than 1 GPU.
# There is a missing call to __init__process_group somewhere
> output_dir = run_trainer(eval_steps=2, max_len="128", model_name=MARIAN_MODEL, num_train_epochs=3)
examples/seq2seq/test_finetune_trainer.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
examples/seq2seq/test_finetune_trainer.py:105: in run_trainer
main()
examples/seq2seq/finetune_trainer.py:294: in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
src/transformers/trainer.py:583: in train
train_dataloader = self.get_train_dataloader()
src/transformers/trainer.py:386: in get_train_dataloader
train_sampler = self._get_train_sampler()
examples/seq2seq/seq2seq_trainer.py:108: in _get_train_sampler
self.args.per_device_train_batch_size, distributed=self.args.n_gpu > 1
examples/seq2seq/utils.py:156: in make_sortish_sampler
return DistributedSortishSampler(self, batch_size, shuffle=shuffle, **kwargs)
examples/seq2seq/utils.py:368: in __init__
num_replicas = dist.get_world_size()
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:582: in get_world_size
return _get_group_size(group)
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:196: in _get_group_size
_check_default_pg()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _check_default_pg():
"""
Helper that checks if the default ProcessGroup has been initialized, with
assertion
"""
assert _default_pg is not None, \
> "Default process group is not initialized"
E AssertionError: Default process group is not initialized
../miniconda3/envs/nb/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py:187: AssertionError
========================================================= short test summary info =========================================================
FAILED examples/seq2seq/test_finetune_trainer.py::test_finetune_trainer - AssertionError: Default process group is not initialized
FAILED examples/seq2seq/test_finetune_trainer.py::test_finetune_trainer_slow - AssertionError: Default process group is not initialized
=========================================================== 2 failed in 11.51s ============================================================
```
| 10-15-2020 19:18:30 | 10-15-2020 19:18:30 | @stas00 would you be interested in taking a look at this, possibly reusing the fix in https://github.com/huggingface/transformers/pull/7281 ?
If that doesn't work we can hack it like `tests/test_trainer.py`:
https://github.com/huggingface/transformers/blob/a1d1b332d07a40177ae1959609ab70dab34018b8/tests/test_trainer.py#L245
<|||||>cc @patil-suraj <|||||>Yes, I will work on it today, Sam.<|||||>the other temp fix option is to use `@require_non_multigpu`<|||||>This is not the test's issue, but the script's one - this fails with the same error.
```
python examples/seq2seq/finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --data_dir examples/seq2seq/test_data/wmt_en_ro --output_dir /tmp/test_outputsarhj9od --overwrite_output_dir --n_train 8 --n_val 8 --max_source_length 12 --max_target_length 12 --val_max_target_length 12 --do_train --do_eval --do_predict --num_train_epochs 1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --learning_rate 3e-4 --warmup_steps 8 --evaluate_during_training --predict_with_generate --logging_steps 0 --save_steps 1 --eval_steps 1 --sortish_sampler --label_smoothing 0.1 --adafactor --task translation --tgt_lang ro_RO --src_lang en_XX
```
I just dumped the args the test was invoking.
`AssertionError: Default process group is not initialized` means that the distributed setup is not done.
I will look more into it tomorrow morning.
On the other hand - if we sort it out - perhaps we could do the same for distributed eval!? It'd be much much better to delegate to PL all that forking, etc.
> If that doesn't work we can hack it like tests/test_trainer.py: line 245
Can you please clarify how do you think it could help? that line of code you quoted does nothing - it's just used for testing and it'll result in `n_gpu=2` anyway. Perhaps you meant somewhere else in that file?<|||||>You need to launch with
```bash
python -m torch.distributed.launch --nproc_per_node=2 finetune_trainer.py
```
caught me up as well.<|||||>In which case, yes, this would be 100% the same as https://github.com/huggingface/transformers/pull/7281 - let's finish it first, then refactor all that new code and use it here.
until then you can use `@require_non_multigpu` so that it doesn't interfere.<|||||>I thought PL had a way of handling distributed internally w/o the user needing to call `-m torch.distributed.launch` - is it not working or I misread it?<|||||>These tests don't use PL. |
transformers | 7,832 | closed | [testing] test_trainer_callback.py cannot collect test class 'TestTrainerCallback' | something is wrong here:
```
$ pytest tests/test_trainer_callback.py
tests/test_trainer_callback.py:24
/mnt/nvme1/code/transformers-comet/tests/test_trainer_callback.py:24: PytestCollectionWarning: cannot collect test class 'TestTrainerCallback' because it has a __init__ constructor (from: tests/test_trainer_callback.py)
class TestTrainerCallback(TrainerCallback):
```
another one in the same test:
```
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow
```
hint: these are in the warnings section of the test.
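For reference, one standard way to keep pytest from collecting such a helper class (a sketch, not necessarily what was merged) is to mark it explicitly:
```python
from transformers import TrainerCallback

class TestTrainerCallback(TrainerCallback):
    # Tell pytest this is a test helper, not a test case, so it is not collected.
    __test__ = False

    def __init__(self, *args, **kwargs):
        super().__init__()
```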
@sgugger | 10-15-2020 19:08:26 | 10-15-2020 19:08:26 | It's just pytest thinking that class is a test because it begins by Test. Will fix, but this is very low priority for me as it runs as intended since pytest does not collect it. No idea about the other warning, but again, not important since the goal of that test is not to test gathered tensors, just the event flow.<|||||>Yes, of course, there is no rush. I can take care of some of them.
In general - the problem with stray warnings is that if there is a problem one has to read through warnings to see if something related is related and if there is a lot of noise in there this wastes a huge amount of time, sifting grain from chaff. Warnings really should be treated as errors, IMHO, or be removed if they aren't important.<|||||>I'm guessing you don't have TensorFlow installed then :-)<|||||>I do:
```
- `transformers` version: 3.3.1
- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201014 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>And you don't get a thousand warnings from it?<|||||>You must be alluding to tf generating a gazillion of useless logs and therefore what's another warning, correct?
I'd say if we had resources to do so there should be no warnings emitted in our test suite. So that when one appears we know that something is not right and fix it. It's like a house with a broken window that doesn't get mended. Others start breaking more windows because there is one broken already.
FWIW, most of the time, because our test suite is so noisy, I run pytest via `pytest --disable-warnings`, unless I'm trying to debug a failing test and I want to see if any warnings might be relevant to the issue at hand.<|||||>Moreover, I have been contributing for about 3 months now and I'm yet to be able to complete the test suite 100% error-free. There is always a handful of failing tests, which seem to be just fine on CIs. I'd like to work on that `USE_CUDA` elimination task, so I was hoping I could get `make test` to pass so that I'd have a working base line, but it's not quite there yet.<|||||>The problem is that I don't know how to catch TF useless (or them not having fixed deprecation warnings for other python libs so not so useless) warnings and keep others.
As for running the whole test suite, I've never tried on non-CPU (like what the CI does) so I don't know what's failing there or if it's important ;-)<|||||>> The problem is that I don't know how to catch TF useless warnings
It'd be quite easy to do if we are talking about tf loggers. I wrote a function that you can do this selectively for any of the offenders of choice (or all of them). https://github.com/huggingface/transformers/issues/3050#issuecomment-682167272
But really we are going back to the PR that wasn't received well, where I proposed to optionally silence the loggers for external modules for the test suite - https://github.com/huggingface/transformers/pull/6816
> As for running the whole test suite, I've never tried on non-CPU (like what the CI does) so I don't know what's failing there or if it's important ;-)
It's probably important ;) Most applications of transformers are probably run on `gpus` - I could be wrong.
And it'd help to have a normal CI that would run on GPU.
Now that we are going to eliminate USE_CUDA, so that gpu will be used by default, it'll improve things a lot, as more developers will run into such problems and hopefully fix them.<|||||>I think the slow tests run the CI on GPU (even GPUs) so the common tests are all passing on that setup. You may have other issues linked to your env.<|||||>Only on scheduled CI which runs only a limited range of options. e.g. not testing newer pytorch releases. |
transformers | 7,831 | closed | [RAG] RagSequenceForGeneration should not load "facebook/rag-token-nq" and RagTokenForGeneration also should not load "facebook/rag-sequence-nq" | ## Environment info
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
## To reproduce
The following usage of the token and sequence models should not be allowed; it may give unintended results in the forward pass:
```
# RagSequenceForGeneration with "facebook/rag-token-nq"
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
# RagTokenForGeneration with "facebook/rag-sequence-nq"
model = RagTokenForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
```
Also please correct example at https://huggingface.co/transformers/master/model_doc/rag.html#ragsequenceforgeneration
## Expected behavior
Above usage should throw exception because both the models are incompatible with the each other. | 10-15-2020 18:53:29 | 10-15-2020 18:53:29 | The model weights are actually 1-1 compatible with each other, so I see no reason why we should throw an exception here.<|||||>Hi Patrick, I also believe there are typos regarding the examples :
On "sequence" based : https://huggingface.co/facebook/rag-sequence-nq , the examples use "token" arguments e.g.
```
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
```<|||||>@patrickvonplaten yes I think agree with you. I am closing this.<|||||>@patrickvonplaten
I am seeing very weird behaviour. Various RAG generator and model combination giving me very different output.
I am not able to understand why?
Check output of generators for **"What is capital of Germany?"** -
```
!pip install git+https://github.com/huggingface/transformers.git
!pip install datasets
!pip install faiss-cpu
!pip install torch torchvision
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
import torch
import faiss
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
input_dict = tokenizer.prepare_seq2seq_batch("What is capital of Germany?", return_tensors="pt")
input_ids = input_dict["input_ids"]
# RagTokenForGeneration with "facebook/rag-token-nq"
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids)
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print("Result of model = ", generated_string)
# RagSequenceForGeneration with "facebook/rag-sequence-nq"
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids)
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print("Result of model = ", generated_string)
# RagSequenceForGeneration with "facebook/rag-token-nq"
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids)
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print("Result of model = ", generated_string)
# RagTokenForGeneration with "facebook/rag-sequence-nq"
model = RagTokenForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
generated_ids = model.generate(input_ids=input_ids)
generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print("Result of model = ", generated_string)
```
Output of above run is (it is consistent behaviour) -
```
Result of model = [' german capital']
Result of model = ['']
Result of model = [' munich']
Result of model = [' germany']
```<|||||>> Hi Patrick, I also believe there are typos regarding the examples :
>
> On "sequence" based : https://huggingface.co/facebook/rag-sequence-nq , the examples use "token" arguments e.g.
>
> ```
> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
> model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
> ```
Should be fixed - thanks :-)
https://github.com/huggingface/transformers/blob/master/model_cards/facebook/rag-sequence-nq/README.md<|||||>> @patrickvonplaten
>
> I am seeing very weird behaviour. Various RAG generator and model combination giving me very different output.
> I am not able to understand why?
>
> Check output of generators for **"What is capital of Germany?"** -
>
> ```
> !pip install git+https://github.com/huggingface/transformers.git
> !pip install datasets
> !pip install faiss-cpu
> !pip install torch torchvision
>
> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
> import torch
> import faiss
>
>
> tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
> retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
>
>
> input_dict = tokenizer.prepare_seq2seq_batch("What is capital of Germany?", return_tensors="pt")
> input_ids = input_dict["input_ids"]
>
> # RagTokenForGeneration with "facebook/rag-token-nq"
> model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
> generated_ids = model.generate(input_ids=input_ids)
> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
> print("Result of model = ", generated_string)
>
> # RagSequenceForGeneration with "facebook/rag-sequence-nq"
> model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
> generated_ids = model.generate(input_ids=input_ids)
> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
> print("Result of model = ", generated_string)
>
> # RagSequenceForGeneration with "facebook/rag-token-nq"
> model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
> generated_ids = model.generate(input_ids=input_ids)
> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
> print("Result of model = ", generated_string)
>
> # RagTokenForGeneration with "facebook/rag-sequence-nq"
> model = RagTokenForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
> generated_ids = model.generate(input_ids=input_ids)
> generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
> print("Result of model = ", generated_string)
> ```
>
> Output of above run is (it is consistent behaviour) -
>
> ```
> Result of model = [' german capital']
> Result of model = ['']
> Result of model = [' munich']
> Result of model = [' germany']
> ```
Hey @lalitpagaria , the models are different in generating the answers - the results are not unexpected :-) If you take a closer look into the code you can see that both models expect the exact same weights, but have different generate() functions<|||||>Thanks @patrickvonplaten
I will play with a few parameters of RagConfig. |
transformers | 7,830 | closed | fix wandb/comet problems | As discussed in https://github.com/huggingface/transformers/issues/7821
* handle the case where `comet_ml` is installed but not configured
* fix error in wandb code:
```
> combined_dict = {**model.config.to_dict(), **combined_dict}
E AttributeError: 'NoneType' object has no attribute 'to_dict'
```
Fixes: #7821
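For context, a minimal sketch of the kind of guard the wandb fix implies; the surrounding names (`setup_wandb`, `args`, `model`) are illustrative assumptions, not the exact patch:
```python
import os
import wandb
def setup_wandb(args, model):
    # hedged sketch: only merge the model config into the logged hyperparameters when it exists
    combined_dict = {**args.to_sanitized_dict()}
    if hasattr(model, "config") and model.config is not None:
        combined_dict = {**model.config.to_dict(), **combined_dict}
    wandb.init(project=os.getenv("WANDB_PROJECT", "huggingface"), config=combined_dict)
```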
@sgugger | 10-15-2020 18:52:46 | 10-15-2020 18:52:46 | Thanks for this! We're working on a refactor of the comet_ml SDK to move some of the complications from here over to there.<|||||>@dsblank, also while you're at it, would it be possible to mend these in both projects? Getting these w/ py38 in the test suite. Thank you!
```
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping, defaultdict
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:35
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:35: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import namedtuple, Mapping, Sequence
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
```<|||||>@stas00 Sure, I'll take a look. I know some of these are caused by our support of Python 2.7, but perhaps there is a way to hide those.<|||||>Much appreciated, @dsblank! |
transformers | 7,829 | closed | [RAG] RagSequenceForGeneration: Running "retriever separately example" giving error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: `dummy_dataset`
## To reproduce
Steps to reproduce the behavior:
1. Execute code snippets provided (Partially modified example script from https://huggingface.co/transformers/master/model_doc/rag.html)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code snippets:
```
!pip install git+https://github.com/huggingface/transformers.git
!pip install datasets
!pip install faiss-cpu
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
import torch
import faiss
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", use_dummy_dataset=True)
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")
input_ids = input_dict["input_ids"]
# Calling retriever separately
question_hidden_states = model.question_encoder(input_ids)[0]
# 2. Retrieve
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
print(docs_dict)
doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
# 3. Forward to generator
outputs = model.generate(input_ids=input_ids, context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(generated_string)
```
Stacktrace:
```
AssertionError Traceback (most recent call last)
<ipython-input-5-9f622b1f6353> in <module>()
7 doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].float().transpose(1, 2)).squeeze(1)
8 # 3. Forward to generator
----> 9 outputs = model.generate(input_ids=input_ids, context_input_ids=docs_dict["context_input_ids"], context_attention_mask=docs_dict["context_attention_mask"], doc_scores=doc_scores)
10 generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True)
11 print(generated_string)
5 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
13 def decorate_context(*args, **kwargs):
14 with self:
---> 15 return func(*args, **kwargs)
16 return decorate_context
17
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, do_deduplication, num_return_sequences, num_beams, **kwargs)
902 # then, run model forwards to get nll scores:
903 new_input_ids = input_ids[index : index + 1].repeat(len(output_sequences), 1)
--> 904 outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True)
905 top_cand_inds = (-outputs["loss"]).topk(num_doc_return_sequences)[1]
906
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, context_input_ids, context_attention_mask, doc_scores, use_cache, output_attentions, output_hidden_states, output_retrieved, exclude_bos_score, reduce_loss, labels, **kwargs)
767 output_attentions=output_attentions,
768 output_hidden_states=output_hidden_states,
--> 769 output_retrieved=output_retrieved,
770 )
771
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, doc_scores, context_input_ids, context_attention_mask, use_cache, output_attentions, output_hidden_states, output_retrieved)
589 assert (
590 context_input_ids is not None
--> 591 ), "Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function."
592 assert (
593 context_attention_mask is not None
AssertionError: Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function.
```
I suspect `context_input_ids` is not passed to the `forward` method. And if the model is not initialised with a retriever, then the `forward` function complains about the missing `context_input_ids` or `retriever`. I am referring to the following piece of code in the `RagSequenceForGeneration` class and its `generate` function.
```
# then, run model forwards to get nll scores:
new_input_ids = input_ids[index : index + 1].repeat(len(output_sequences), 1)
outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True)
top_cand_inds = (-outputs["loss"]).topk(num_doc_return_sequences)[1]
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should work as intended, as `RagTokenForGeneration` does. | 10-15-2020 18:47:38 | 10-15-2020 18:47:38 | Hey @lalitpagaria - spot on! Thanks a lot for your issue, you're 100% correct here.
I actually noticed that the `RagSequence` generate function is a bit more complex so that we cannot do the decomposed (embed, retrieve, generate) example here...
The PR linked to the issue removes the use case from the examples and fixes the one for `RagToken...`.<|||||>UPDATE: @patrickvonplaten Sorry for my misunderstanding. Yes, not calling generate directly, together with your PR, fixed this. Thank you very much for the fix.
-------------------
@patrickvonplaten Thank you for the update. ~~I tried your changes on my code snippets and still got the same error. If you see my example, I am passing `context_input_ids`~~
|
transformers | 7,828 | closed | fix: ignore padding tokens in Bart loss | # What does this PR do?
There is a discrepancy between the [fine-tuning script](https://github.com/huggingface/transformers/blob/e7aa64838cc604abf7a49e69ca0ffe7af683d8ca/examples/seq2seq/finetune.py#L153) and the [BartForConditionalGeneration](https://github.com/huggingface/transformers/blob/e7aa64838cc604abf7a49e69ca0ffe7af683d8ca/src/transformers/modeling_bart.py#L1113) which is also noted in the comments.
From `examples/seq2seq/finetune.py`:
```python
# Same behavior as modeling_bart.py, besides ignoring pad_token_id
ce_loss_fct = torch.nn.CrossEntropyLoss(ignore_index=pad_token_id)
```
From `transformers/src/modeling_bart.py`:
```python
loss_fct = CrossEntropyLoss()
# TODO(SS): do we need to ignore pad tokens in labels?
```
Training with the `Trainer` and `BartForConditionalGeneration` results in a model that produces garbled text (lots of repetitions and no coherence). Adding the `ignore_index=self.config.pad_token_id` in the `CrossEntropyLoss` resolves the issue.
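To make the proposed behaviour concrete, here is a small self-contained sketch of a loss computation that ignores padding; the helper name and signature are illustrative, not the actual model code:
```python
import torch
from torch.nn import CrossEntropyLoss
def lm_loss_ignoring_pad(lm_logits: torch.Tensor, labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # identical to the default cross-entropy, except positions where labels == pad_token_id do not contribute
    loss_fct = CrossEntropyLoss(ignore_index=pad_token_id)
    return loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
```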
Besides a before and after run I did not study the behaviour in a systematic way since training the model requires a significant amount of time and compute. If you would like to see more testing let me know what you think is the best way to test this thoroughly.
| 10-15-2020 17:47:43 | 10-15-2020 17:47:43 | I don't seem to be able to add reviewers but I guess this would fall into the domain of @sshleifer. <|||||>Thx for the contribution!
FYI we have a seq2seq finetuner with this bugfix.
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py#L43
I worked on this at some point and thought I had fixed it.
Any issue with merging this @patil-suraj @patrickvonplaten ?<|||||>LGTM. but should be documented. seen few notebooks where people are setting pad tokens to -100 in labels . We should change this for T5 as well<|||||>> FYI we have a seq2seq finetuner with this bugfix.
> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/seq2seq_trainer.py#L43
Thanks, I did not see that! With the fix in the model I was able to train Pegasus with the standard Trainer.<|||||>> LGTM. but should be documented. seen few notebooks where people are setting pad tokens to -100 in labels . We should change this for T5 as well
Good point, I remember that through me off because it explicitly says -100 works in the model's [docstring](https://github.com/huggingface/transformers/blob/4dbca500226e27be21dbb0eb08117dfd0c5264b3/src/transformers/modeling_bart.py#L1048).<|||||>I updated the docstring and added two assertions. Are these the assertions you were looking for @patil-suraj ?<|||||>I am not in favor of this PR to be honest.
1) By replacing the default `ignore_idx = -100` with `ignore_idx = pad_token_id` we constrain the user to only being able to ignore padding tokens and no other tokens. Previously a user could simply set any tokens that should be ignored to -100 (the pad_token, but also any other tokens). After this PR a user would not be able to ignore tokens other than the pad_token anymore.
2) This is not consistent with other models, which always only use -100 to ignore the loss
3) This is just a convenience function that should not be handled directly in the model itself. I'm 100% fine if this is handled in the `Seq2SeqTrainer` or `Trainer`
Already discussed offline with @sshleifer.
What are your thoughts on this @LysandreJik @sgugger @thomwolf ?<|||||>I agree that it would be nice to have a uniform pattern across the model architectures allowing to use the models interchangeably.
It seems there is some work needed to make this allow `-100` tokens in `Bart` since they break the embedding process in the forward pass:
```Python
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, output_attentions, output_hidden_states, return_dict)
332 attention_mask = invert_mask(attention_mask)
333
--> 334 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
335 embed_pos = self.embed_positions(input_ids)
336 x = inputs_embeds + embed_pos
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index -100 out of table with 50263 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```<|||||>@lvwerra I think we should ignore `pad_token_id`, but if we go the -100 route it should be fine if you pass `decoder_input_ids` to BART? I don't see a call on your traceback so can't be sure.<|||||>@sshleifer I tried to pass `-100` in the `input_ids`:
```Python
from transformers import BartForConditionalGeneration, BartTokenizer
import torch
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
model(torch.tensor([[0, -100]]))
```
But I get the same error with:
```Python
t = model(torch.tensor([[0, 1]]), decoder_input_ids=torch.tensor([[0, 0, -100]]))
```
It seems that the error comes from the line `x = self.embed_tokens(input_ids) * self.embed_scale` which is called in both forward passes of the encoder and decoder modules. How do you usually deal with this?
<|||||>I think to successfully implement the -100 strategy (Which I have never done),
you have to pass labels that contain -100 and decoder_input_ids that don't contain -100.<|||||>Yes I would tend to agree with @patrickvonplaten, I think that the usual philosophy of the lib is that we let the user handle this himself and have clear and simple exemple which shows that you should replace pad_token ids with ignore index in the labels.<|||||>I somehow missed the notification when @patrickvonplaten asked for advice earlier but I agree with what he said. We only handle a basic loss computation inside the model. We refused PRs to add weights for cross-entropy recently, for the same reason @thomwolf just pointed out: anything fancier should be done by the user themself, as we can't support every use case.
For the `Trainer` and `Seq2SeqTrainer`, there is any easy way to handle a custom loss computation, by subclassing and overriding the `compute_loss` function (see the [example in the docs](https://huggingface.co/transformers/main_classes/trainer.html).<|||||>Thanks for the feedback @thomwolf & @sgugger! From a user perspective, I think it would be great if one could use a model in combination with the `Trainer` for the model's standard task (e.g. conditional generation) without customisation or subclassing. If the default loss function of the model does not support that, then for which other use-cases would the default behaviour (not ignoring padding tokens in the labels) still be useful? Customisation already requires advanced knowledge of the inner workings of the `Trainer` class which not all users might have. If a user wants to do something more sophisticated than the standard task that requires modification of this behaviour, they could still write a custom `compute_loss` function.
If you want to go the `-100`-route the user has to "manually" right shift the `labels` tokens to create the `decoder_input_ids ` and then replace the `pad_token_id` in the `labels` with `-100`. As far as I can tell, this is always required to train the model for conditional generation, so I am wondering why it should not be the default behaviour inside the model `BartForConditionalGeneration`? Otherwise, the defaults are never used in practice and customisation is always required to train the model.
<|||||>Hey @lvwerra,
I think the main arguments against ignoring the `pad_token_id` inside `BartForConditionalGeneration` is that:
1. We cannot allow all models to have this behavior because some of them do not have a `pad_token_id`, *e.g.* GPT2. Because consistency between models is one of our top priorities, it is not a good idea to use -100 for some models and `pad_token_id` for others.
2. There are use cases where users want to not only ignore the padding token, but also other tokens, *e.g.* the eos token id. In this case it would be cleaner to set both pad and eos to -100 and ignore those tokens than setting the eos token to the pad token.<|||||>Ok, so if I understand correctly the minimal example to train a Bart model given a `dataset` object with columns `'text'` and `'summary'` would be to apply the following function (e.g. with `.map()`) before passing the model and the dataset to the `Trainer`:
```Python
import torch
from transformers.modeling_bart import shift_tokens_right
def convert_to_features(example_batch):
    input_encodings = tokenizer.batch_encode_plus(example_batch['text'], pad_to_max_length=True)
    target_encodings = tokenizer.batch_encode_plus(example_batch['summary'], pad_to_max_length=True)
    # shift_tokens_right expects a tensor, and the padding id of the BART tokenizer is model.config.pad_token_id, not 0
    labels = torch.tensor(target_encodings['input_ids'])
    decoder_input_ids = shift_tokens_right(labels, model.config.pad_token_id)
    labels[labels == model.config.pad_token_id] = -100
    encodings = {
        'input_ids': input_encodings['input_ids'],
        'attention_mask': input_encodings['attention_mask'],
        'decoder_input_ids': decoder_input_ids,
        'labels': labels,
    }
    return encodings
```
It took me quite a while reading examples and code reading to figure this out. Not only the thing with the padding tokens and -100 but also the difference between `decoder_input_ids` and `labels`. I am more than happy to update the docs to save the next person some time, since this seems not to be an edge case but the minimal work required to train Bart for conditional generation. Is there a good place to point this out?
<|||||>you could write a forums post and link to it from bart.rst? |
transformers | 7,827 | closed | Do I need to apply the softmax function to my logit before calculating the CrossEntropyLoss? | Hello, I am trying to compute the CrossEntropyLoss directly by using this code:
```
loss_fct = CrossEntropyLoss()
mc_loss = loss_fct(reshaped_logits, mc_labels)
```
If the reshaped_logits contain the logit values before softmax, should I apply `nn.softmax` function before I do `loss_fct(reshaped_logits, mc_labels)`? Thank you, | 10-15-2020 17:23:08 | 10-15-2020 17:23:08 | There is no need to add the nn.softmax function. Pls. refer to the [pytorch document](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
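For anyone finding this later, a small self-contained check (tensor values are made up) showing that `torch.nn.CrossEntropyLoss` applies log-softmax internally, so it should be fed the raw logits:
```python
import torch
from torch.nn import CrossEntropyLoss
logits = torch.tensor([[2.0, -1.0, 0.5]])  # raw, unnormalized scores
labels = torch.tensor([0])
loss_fct = CrossEntropyLoss()
print(loss_fct(logits, labels))                         # correct: pass the raw logits
print(loss_fct(torch.softmax(logits, dim=-1), labels))  # applying softmax first gives a different, distorted loss
```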
|
transformers | 7,826 | closed | [Pipelines] Fix links to model lists | 10-15-2020 17:07:16 | 10-15-2020 17:07:16 | ||
transformers | 7,825 | closed | RFC: Move `_NoLayerEmbedTokens` to modeling_tf_utils.py | It has 0 computation and will be needed in many places. Here is the code:
```python
class _NoLayerEmbedTokens:
"""
this class wraps a the TFSharedEmbeddingTokens layer into a python 'no-keras-layer'
class to avoid problem with weight restoring. Also it makes sure that the layer is
called from the correct scope to avoid problem with saving/storing the correct weights
"""
def __init__(self, layer, abs_scope_name=None):
self._layer = layer
self._abs_scope_name = abs_scope_name
def call(self, inputs, mode="embedding"):
if self._abs_scope_name is None:
return self._layer.call(inputs, mode)
# if an abs scope name is given to the embedding variable, call variable from absolute scope
with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
with tf.name_scope(abs_scope_name.original_name_scope):
return self._layer.call(inputs, mode)
def __call__(self, inputs, mode="embedding"):
if self._abs_scope_name is None:
return self._layer(inputs, mode)
# if an abs scope name is given to the embedding variable, call variable from absolute scope
with tf.compat.v1.variable_scope(self._abs_scope_name, auxiliary_name_scope=False) as abs_scope_name:
with tf.name_scope(abs_scope_name.original_name_scope):
return self._layer(inputs, mode)
```
I will copy it into tfbart to avoid getting bogged down in debate, but wdyt @patrickvonplaten ? | 10-15-2020 17:00:09 | 10-15-2020 17:00:09 | `TFSharedEmbeddings` is in `modeling_tf_utils.py`
This seems similar.<|||||>I'm fine with moving this into `modeling_tf_utils.py` |
transformers | 7,824 | closed | Bart Caching: do we need encoder outputs after step 1? | since projected cross attention k,v are cached, this doesn't seem like it's actually needed.
Maybe we could save some mem if we deleted them after we were done. | 10-15-2020 16:41:07 | 10-15-2020 16:41:07 | Very good point! But I'm actually not sure if the memory consumption of `num_layers * cached k, v states` already makes memory consumption of `encoder_outputs` negligible. Would be cool to try out.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,823 | closed | Support for custom data_collator in Trainer.train() with datasets.Dataset | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Currently (transformers==3.3.1) Trainer removes unknown columns (those not present in the model's forward method) from a datasets.Dataset object. This prevents using a custom DataCollator in the `.train` method, since the dataset no longer has the columns one would want to use.
I would be interested in an option to not remove unknown columns and to let the user handle them in a DataCollator (or to provide their own DataLoaders to the train method).
## Motivation
The example code I was trying to use for training looks like this:
```
import torch
from typing import List
from datasets import load_dataset
from transformers import RobertaForSequenceClassification, RobertaTokenizer, Trainer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True)
train_dataset = load_dataset("yelp_polarity", split="train")
val_dataset = load_dataset("yelp_polarity", split="test")
class DataCollator:
def __init__(self, tokenizer):
self.tokenizer = tokenizer
def __call__(self, examples: List[dict]):
labels = [example['label'] for example in examples]
texts = [example['text'] for example in examples]
tokenizer_output = self.tokenizer(texts, truncation=True, padding=True)
tokenizer_output['input_ids'] = torch.tensor(tokenizer_output['input_ids'])
tokenizer_output['attention_mask'] = torch.tensor(tokenizer_output['attention_mask'])
        output_dict = dict(labels=torch.tensor(labels), **tokenizer_output)
return output_dict
data_collator = DataCollator(tokenizer)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
data_collator=data_collator # problem with removing unwanted columns by transformers.Trainer code. Doesn't really work, is it a bug?
)
trainer.train()
```
(with `trainer.train()` not working)
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
Before trying to fix things, I would be interested to learn if there is a suggested alternative way of doing that?
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 10-15-2020 16:15:22 | 10-15-2020 16:15:22 | Just set `remove_unused_columns=False` in your `TrainingArguments` to disable that behavior.<|||||>Great, I completely missed that! Thanks. |
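For completeness, a minimal sketch of where that flag goes (every value other than `remove_unused_columns` is illustrative):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="./results",          # illustrative
    remove_unused_columns=False,     # keep all dataset columns so a custom data_collator can use them
)
# then pass `args=training_args` to the Trainer together with the custom data_collator
```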
transformers | 7,822 | closed | [testing] test_modeling_deberta.py is failing | ```
USE_CUDA=1 pytest tests/test_modeling_deberta.py
```
```
====================================================================== test session starts =======================================================================
platform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers-master
plugins: xdist-2.1.0, forked-1.3.0, typeguard-2.9.1, flake8-1.0.6, hydra-core-1.0.0, cov-2.10.1, instafail-0.4.2, flakefinder-1.0.0
collected 33 items
tests/test_modeling_deberta.py ...F.FssFs............s..F....sss [100%]
============================================================================ FAILURES ============================================================================
______________________________________________________________ DebertaModelTest.test_deberta_model _______________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_deberta_model>
def test_deberta_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_model(*config_and_inputs)
tests/test_modeling_deberta.py:210:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:159: in create_and_check_deberta_model
sequence_output = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids)[0]
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-0.1865, 1.6041, -1.3552, ..., -1.2947, -0.9667, 0.9630],
[ 1.1352, 0.1829, 0.3669, ..., -2.4... [-0.3743, -0.0251, -0.8356, ..., -0.9786, -1.3914, -1.9630]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 0, 1, 1, 0],
[1, 1, 1, 0, 1, 1, 0],
[0, 0, 0,..., 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 0, 0, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
__________________________________________________________ DebertaModelTest.test_feed_forward_chunking ___________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_feed_forward_chunking>
def test_feed_forward_chunking(self):
(
original_config,
inputs_dict,
) = self.model_tester.prepare_config_and_inputs_for_common()
for model_class in self.all_model_classes:
torch.manual_seed(0)
config = copy.deepcopy(original_config)
model = model_class(config)
model.to(torch_device)
model.eval()
> hidden_states_no_chunk = model(**self._prepare_for_class(inputs_dict, model_class))[0]
tests/test_modeling_common.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-0.1510, 1.7101, -1.2962, ..., 1.5861, -0.6766, 0.3315],
[ 0.0000, -0.0000, -0.0000, ..., 0.0... [-0.0000, -0.0000, -0.0000, ..., 0.0000, 0.0000, 0.0000]]],
device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0,..., 0, 1, 1, 0],
[1, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
_______________________________________________________ DebertaModelTest.test_for_sequence_classification ________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_for_sequence_classification>
def test_for_sequence_classification(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_deberta_for_sequence_classification(*config_and_inputs)
tests/test_modeling_deberta.py:214:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_deberta.py:177: in create_and_check_deberta_for_sequence_classification
loss, logits = model(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:987: in forward
outputs = self.deberta(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[-6.2643e-01, -1.6159e+00, 1.4385e+00, ..., 1.2083e+00,
-7.2520e-02, 4.6405e-01],
[ 7....62e-01, 6.8716e-01, ..., -1.0658e+00,
-9.1048e-01, 4.0098e-01]]], device='cuda:0', grad_fn=<MulBackward0>)
attention_mask = tensor([[[[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 0, 0],
[1, 1, 1,..., 0, 0, 0, 0],
[1, 1, 1, 1, 0, 1, 1],
[1, 1, 1, 1, 0, 1, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
_________________________________________________________ DebertaModelTest.test_resize_tokens_embeddings _________________________________________________________
self = <tests.test_modeling_deberta.DebertaModelTest testMethod=test_resize_tokens_embeddings>
def test_resize_tokens_embeddings(self):
(
original_config,
inputs_dict,
) = self.model_tester.prepare_config_and_inputs_for_common()
if not self.test_resize_embeddings:
return
for model_class in self.all_model_classes:
config = copy.deepcopy(original_config)
model = model_class(config)
model.to(torch_device)
if self.model_tester.is_training is False:
model.eval()
model_vocab_size = config.vocab_size
# Retrieve the embeddings and clone theme
model_embed = model.resize_token_embeddings(model_vocab_size)
cloned_embeddings = model_embed.weight.clone()
# Check that resizing the token embeddings with a larger vocab size increases the model's vocab size
model_embed = model.resize_token_embeddings(model_vocab_size + 10)
self.assertEqual(model.config.vocab_size, model_vocab_size + 10)
# Check that it actually resizes the embeddings matrix
self.assertEqual(model_embed.weight.shape[0], cloned_embeddings.shape[0] + 10)
# Check that the model can still do a forward pass successfully (every parameter should be resized)
> model(**self._prepare_for_class(inputs_dict, model_class))
tests/test_modeling_common.py:655:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:891: in forward
encoder_outputs = self.encoder(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:401: in forward
hidden_states = layer_module(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:324: in forward
attention_output = self.attention(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
src/transformers/modeling_deberta.py:257: in forward
self_output = self.self(
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/nn/modules/module.py:727: in _call_impl
result = self.forward(*input, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = DisentangledSelfAttention(
(in_proj): Linear(in_features=32, out_features=96, bias=False)
(dropout): StableDropout()
)
hidden_states = tensor([[[ 0.0000, -1.3935, 1.0241, ..., 0.4936, 0.0103, 0.0000],
[-0.9223, 0.6012, -0.7819, ..., -0.8... [ 0.0000, -0.4612, 1.3087, ..., 1.2537, 0.3499, -0.9116]]],
device='cuda:0', grad_fn=<XDropoutBackward>)
attention_mask = tensor([[[[1, 1, 0, 0, 1, 1, 0],
[1, 1, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0,..., 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 1]]]], device='cuda:0', dtype=torch.uint8)
return_att = False, query_states = None, relative_pos = None, rel_embeddings = None
def forward(
self,
hidden_states,
attention_mask,
return_att=False,
query_states=None,
relative_pos=None,
rel_embeddings=None,
):
"""Call the module
Args:
hidden_states (:obj:`torch.FloatTensor`):
Input states to the module usally the output from previous layer, it will be the Q,K and V in `Attention(Q,K,V)`
attention_mask (:obj:`torch.ByteTensor`):
An attention mask matrix of shape [`B`, `N`, `N`] where `B` is the batch size, `N` is the maxium sequence length in which element [i,j] = `1` means the `i` th token in the input can attend to the `j` th token.
return_att (:obj:`bool`, optional):
Whether return the attention maxitrix.
query_states (:obj:`torch.FloatTensor`, optional):
The `Q` state in `Attention(Q,K,V)`.
relative_pos (:obj:`torch.LongTensor`):
The relative position encoding between the tokens in the sequence. It's of shape [`B`, `N`, `N`] with values ranging in [`-max_relative_positions`, `max_relative_positions`].
rel_embeddings (:obj:`torch.FloatTensor`):
The embedding of relative distances. It's a tensor of shape [:math:`2 \\times \\text{max_relative_positions}`, `hidden_size`].
"""
if query_states is None:
qp = self.in_proj(hidden_states) # .split(self.all_head_size, dim=-1)
query_layer, key_layer, value_layer = self.transpose_for_scores(qp).chunk(3, dim=-1)
else:
def linear(w, b, x):
if b is not None:
return torch.matmul(x, w.t()) + b.t()
else:
return torch.matmul(x, w.t()) # + b.t()
ws = self.in_proj.weight.chunk(self.num_attention_heads * 3, dim=0)
qkvw = [torch.cat([ws[i * 3 + k] for i in range(self.num_attention_heads)], dim=0) for k in range(3)]
qkvb = [None] * 3
q = linear(qkvw[0], qkvb[0], query_states)
k, v = [linear(qkvw[i], qkvb[i], hidden_states) for i in range(1, 3)]
query_layer, key_layer, value_layer = [self.transpose_for_scores(x) for x in [q, k, v]]
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
> value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
E RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp":352, please report a bug to PyTorch.
src/transformers/modeling_deberta.py:575: RuntimeError
======================================================================== warnings summary ========================================================================
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/utils.py:30: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping, defaultdict
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/directives.py:55
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/typemap.py:1
/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import OrderedDict, Sequence, defaultdict
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model
tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking
tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification
tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:574: UserWarning: Output 0 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp:480.)
query_layer += self.transpose_for_scores(self.q_bias[None, None, :])
tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model
tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking
tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification
tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:575: UserWarning: Output 2 of SplitBackward is a view and is being modified inplace. This view is an output of a function that returns multiple views. Inplace operators on such views are being deprecated and will be forbidden starting from version 1.8. Consider using `unsafe_` version of the function that produced this view or don't modify this view inplace. (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/autograd/variable.cpp:480.)
value_layer += self.transpose_for_scores(self.v_bias[None, None, :])
tests/test_modeling_deberta.py::DebertaModelTest::test_model_outputs_equivalence
/mnt/nvme1/code/huggingface/transformers-master/src/transformers/modeling_deberta.py:1011: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1602659340682/work/torch/csrc/utils/python_arg_parser.cpp:945.)
label_index = (labels >= 0).nonzero()
-- Docs: https://docs.pytest.org/en/stable/warnings.html
==================================================================== short test summary info =====================================================================
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_deberta_model - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "/opt/con...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_feed_forward_chunking - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED at "...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_for_sequence_classification - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILE...
FAILED tests/test_modeling_deberta.py::DebertaModelTest::test_resize_tokens_embeddings - RuntimeError: diff_view_meta->output_nr_ == 0 INTERNAL ASSERT FAILED a...
====================================================== 4 failed, 22 passed, 7 skipped, 13 warnings in 7.85s ====================================================
```
Thank you!
## Environment info
```
- `transformers` version: master
- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201014 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
@sgugger, @LysandreJik | 10-15-2020 16:13:12 | 10-15-2020 16:13:12 | Good to know that this we need to update this code for version 1.8.0. Thanks!<|||||>Ah, right, I didn't notice that it was nightly specific - if I'm not mistaken this is really for 1.7 I think - it's hard to tell though.<|||||>Ah, possible! I will take a look :)<|||||>Was fixed in https://github.com/huggingface/transformers/pull/8057 |
transformers | 7,821 | closed | [testing] trainer tests fail - 2 issues | trainer tests seem to have several issues:
```
USE_CUDA=1 pytest tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer
```
1.
```
if self.api_key is None:
> raise ValueError(
"Comet.ml requires an API key. Please provide as the "
"first argument to Experiment(api_key) or as an environment"
" variable named COMET_API_KEY "
)
```
the test should skip and not fail if the user doesn't have Comet.ml
2. wandb seems to still be problematic when there is an error in the test.
```
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/main-38/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 180, in console_main
code = main()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 157, in main
ret = config.hook.pytest_cmdline_main(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py", line 289, in pytest_cmdline_main
return wrap_session(config, _main)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py", line 284, in wrap_session
config._ensure_unconfigure()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 920, in _ensure_unconfigure
fin()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 632, in stop_global_capturing
self._global_capturing.pop_outerr_to_orig()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 522, in pop_outerr_to_orig
out, err = self.readouterr()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 563, in readouterr
out = self.out.snap()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 481, in snap
self.tmpfile.seek(0)
OSError: [Errno 29] Illegal seek
```
I have to add `WANDB_DISABLED=true` to overcome those.
There was an issue about it earlier, but I am not able to find it.
Full report:
```
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training - ValueError: Comet.ml requires an API key. Please provide as the first argument...
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_flos_extraction - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_flos_extraction - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_model_init - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_model_init - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_num_train_epochs_in_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_num_train_epochs_in_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_predict - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_predict - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_save_checkpoints - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_save_checkpoints - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_train_and_eval_dataloaders - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_train_and_eval_dataloaders - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_lm - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_lm - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_iterable_dataset - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_iterable_dataset - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_datasets - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_datasets - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_training_arguments_are_left_untouched - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_training_arguments_are_left_untouched - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_add_remove_callback - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_add_remove_callback - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_init_callback - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_init_callback - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_utils.py::TrainerUtilsTest::test_distributed_tensor_gatherer - OSError: [Errno 29] Illegal seek
ERROR tests/test_trainer_utils.py::TrainerUtilsTest::test_distributed_tensor_gatherer - OSError: [Errno 29] Illegal seek
ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_find_code_in_transformers - OSError: [Errno 29] Illegal seek
ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_find_code_in_transformers - OSError: [Errno 29] Illegal seek
ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent - OSError: [Errno 29] Illegal seek
ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent - OSError: [Errno 29] Illegal seek
```
Thank you!
## Environment info
```
- `transformers` version: master
- Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0.dev20201014 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
@sgugger
| 10-15-2020 16:09:00 | 10-15-2020 16:09:00 | All tests are passing on my side and the CI so it's something in your environment that triggers the failures. Could you add what you have inside it?<|||||>This?
> the test should skip and not fail if the user doesn't have `Comet.ml`
<|||||>I don't have `comet.ml` installed in my dev env. I think the problem might be having it installed without the key properly set.<|||||>Right, so this comes from `transformers`:
```
src/transformers/integrations.py: experiment = comet_ml.Experiment(**args)
src/transformers/trainer_tf.py: experiment = comet_ml.Experiment(**args)
```
and it fails in:
```
if self.api_key is None:
> raise ValueError(
"Comet.ml requires an API key. Please provide as the "
"first argument to Experiment(api_key) or as an environment"
" variable named COMET_API_KEY "
)
```
and it has a check:
```
if is_comet_available():
import comet_ml
```
so the tests need to do something about it.
I have never explicitly installed `comet_ml`, but some other package installed it as a dependency. Yet, I don't have this service configured. It's like `wandb` - it needs to gracefully skip tests if `comet_ml` is installed but not configured. To configure you have to go to that service website, sign up, create API key, etc. etc.
I suppose there needs to be a skip decorator that does something like:
```
import unittest
from .integrations import is_comet_available

def working_comet_ml(test_case):
    # handle the case where comet_ml is installed but not configured
    try:
        if is_comet_available():
            import comet_ml
            comet_ml.Experiment()
    except (ImportError, ValueError):
        return unittest.skip("comet_ml is not installed or not configured")(test_case)
    return test_case
```
this is untested.
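Such a decorator could then be applied like the existing skip helpers; a tiny usage sketch (the test class and test name below are just examples, not actual test code):
```python
import unittest

class TrainerIntegrationTest(unittest.TestCase):
    @working_comet_ml
    def test_can_resume_training(self):
        ...  # body of the actual test
```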
Most likely you should be able to reproduce the problem by just installing `comet_ml`
Or perhaps the 2 places where it's used in `transformers` should issue a warning rather than raise an error, which is probably a better way to handle it.
**edit**: thinking more, it is the library that shouldn't crash, and not the tests that should skip. It's an optional feature not a requirement, so it should behave gracefully.<|||||>Seems to come from: https://github.com/huggingface/transformers/commit/b923871bb78f538e3c2e4bf36776986c800da1ae
So I guess we need to ask @dsblank, who made the initial PR and I see you were in the reviewers too.<|||||>Yes I think raising a warning should be better than a hard error since it does not mean the training has to stop, and it blocks the tests. If you can suggest a PR with those changes, I'd happily review it.<|||||>Found a better solution https://github.com/huggingface/transformers/pull/7830
`comet_ml` needs to have a `comet_ml.ensure_configured()` like `wandb.ensure_configured()` - I am not sure whether @dsblank could add/ask for it. Thank you!
Until then it'll be emulated by:
```
try:
# Comet needs to be imported before any ML frameworks
import comet_ml # noqa: F401
# XXX: there should be comet_ml.ensure_configured(), like `wandb`, for now emulate it
comet_ml.Experiment(project_name="ensure_configured")
_has_comet = True
except (ImportError, ValueError):
_has_comet = False
```
<|||||>and wandb needed the latest version - which fixed the `OSError: [Errno 29] Illegal seek` issue |
transformers | 7,820 | closed | Fix small type hinting error | Fix small type hinting error which causes type warnings in some IDEs when using `torch.device` instead of a string | 10-15-2020 15:44:47 | 10-15-2020 15:44:47 | @LysandreJik done, thanks for pointing that out! |
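For illustration, the kind of annotation change described above could look like this (hypothetical function, not the actual transformers code):
```python
from typing import Union

import torch

def move_to_device(tensor: torch.Tensor, device: Union[str, torch.device]) -> torch.Tensor:
    # accepting both a string like "cuda:0" and a torch.device avoids the IDE type warning
    return tensor.to(device)
```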
transformers | 7,819 | closed | Create README.md | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-15-2020 15:05:41 | 10-15-2020 15:05:41 | |
transformers | 7,818 | closed | Remove masked_lm_labels from returned dictionary in DataCollatorForNextSentencePrediction | # What does this PR do?
This PR removes ```masked_lm_labels``` from the dictionary returned by the DataCollatorForNextSentencePrediction ```__call__``` method. I noticed the warning while pre-training BERT; after removing the lines (452, 453) the warning was finally gone. See the discussion George (@gmihaila) and I already had in the [merged PR](https://github.com/huggingface/transformers/pull/7595)
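The spirit of the change, as a tiny self-contained sketch (illustration only, not the actual collator code):
```python
from typing import Any, Dict

def drop_deprecated_key(batch: Dict[str, Any]) -> Dict[str, Any]:
    # the model expects "labels"; the old "masked_lm_labels" alias only triggers the deprecation warning
    batch.pop("masked_lm_labels", None)
    return batch
```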
<!-- Remove if not applicable -->
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case. **Discussed on forum but on previously merged [PR](https://github.com/huggingface/transformers/pull/7595) **
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not needed**
- [x] Did you write any new necessary tests? **Not needed**
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@LysandreJik, @sgugger, and @gmihaila can review
| 10-15-2020 14:59:41 | 10-15-2020 14:59:41 | |
transformers | 7,817 | closed | Fix missing reference titles in retrieval evaluation of RAG | There's currently a bug in the retrieval evaluation script of RAG where for each sample, the first reference title is discarded.
Moreover, only `k` reference titles were taken into account, while we need all of them to compute Precision @ k.
Because of this bug the index provided by the RAG team (hnsw M=128 + SQ8 with the "inner product to L2" trick) only had 35% of Precision @ k. With this fix it is back to 70%, which is consistent with what they had internally as far as I know.
Apparently the original parsing script used to add the question before the titles, that's why it was discarding the first entry.
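For reference, a small sketch of the Precision @ k computation this evaluation relies on, assuming one list of retrieved titles and one list of reference titles per sample (illustrative, not the actual evaluation script):
```python
from typing import List

def precision_at_k(retrieved_titles: List[str], reference_titles: List[str], k: int) -> float:
    # every reference title has to be kept; truncating the references to k is the bug described above
    references = {title.strip() for title in reference_titles}
    hits = sum(1 for title in retrieved_titles[:k] if title.strip() in references)
    return hits / k
```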
cc @ola13 | 10-15-2020 14:34:52 | 10-15-2020 14:34:52 | Great! |
transformers | 7,816 | closed | RAG - MissingIndex: Index with index_name 'embeddings' not initialized yet | ## Environment info
transformers version: 3.3.1
Platform: Ubuntu
Python version:3.6.12
PyTorch version (GPU: yes): 1.6.0
Using GPU in script?: 1 gpu
Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @sgugger
## Information
model name: facebook/rag-sequence-base
The problem arises when using the official example scripts.
The task I am working on is my own task or dataset.
## To reproduce
1) Make a directory at examples/rag/ioannis-data and add the train, eval, and test files to it
2) Run transformers/examples/rag/finetune.sh with following changes:
--data_dir examples/rag/ioannis-data \
--output_dir examples/rag/ioannis-output \
--model_name_or_path facebook/rag-sequence-base
The script terminates with the following error:
`datasets.search.MissingIndex: Index with index_name 'embeddings' not initialized yet. Please make sure that you call 'add_faiss_index' or 'add_elasticsearch_index' first.`
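For reference, the call the error message is asking for looks roughly like this on the `datasets` side (paths are placeholders; in the distributed fine-tuning script this is supposed to happen inside the retriever on the rank 0 process):
```python
from datasets import load_from_disk

# load the knowledge dataset and attach a FAISS index over its "embeddings" column,
# which is exactly what the MissingIndex error is complaining about
dataset = load_from_disk("path/to/my_knowledge_dataset")
dataset.add_faiss_index(column="embeddings")
# or, to reload a previously built index instead of rebuilding it:
# dataset.load_faiss_index("embeddings", "path/to/my_knowledge_dataset_index.faiss")
```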
| 10-15-2020 14:34:12 | 10-15-2020 14:34:12 | I was able to get passed this error by using my own RagRetriever instead of RagPyTorchDistributedRetriever inside "transformers/examples/rag/finetune.py"
I am also using my own custom dataset and index #7763
The following changes got me past the missing index error. However, I have no idea if this is efficient or if I am doing something that I shouldn't be doing...
```
if self.is_rag_model:
if args.prefix is not None:
config.generator.prefix = args.prefix
config.label_smoothing = hparams.label_smoothing
hparams, config.generator = set_extra_model_params(extra_model_params, hparams, config.generator)
# commented out this line
# retriever = RagPyTorchDistributedRetriever.from_pretrained(hparams.model_name_or_path, config=config)
############### new stuff ###############
dataset = load_from_disk(args.passages_path) # to reload the dataset
dataset.load_faiss_index("embeddings", args.index_path) # to reload the index
retriever = RagRetriever.from_pretrained(
hparams.model_name_or_path, index_name="custom", indexed_dataset=dataset
)
######################################
model = self.model_class.from_pretrained(hparams.model_name_or_path, config=config, retriever=retriever)
prefix = config.question_encoder.prefix
```<|||||>Won't have the time in the next 1,2 weeks to take a closer look sadly. Maybe @lhoestq this is interesting to you<|||||>Could you paste the full stacktrace ?<|||||>Thank you @lhoestq .
```
GPU available: True, used: True
INFO:lightning:GPU available: True, used: True
TPU available: False, using: 0 TPU cores
INFO:lightning:TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
INFO:lightning:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Using native 16bit precision.
INFO:lightning:Using native 16bit precision.
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):
File "examples/rag/finetune.py", line 499, in <module>
main(args)
File "examples/rag/finetune.py", line 471, in main
logger=logger,
File "/home/ioannis/Desktop/transformers-lhoestq-2/transformers/examples/lightning_base.py", line 384, in generic_train
trainer.fit(model)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
results = self.accelerator_backend.train()
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
results = self.train_or_test()
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test
results = self.trainer.train()
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
self.run_sanity_check(self.get_model())
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 648, in run_sanity_check
_, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 568, in run_evaluation
output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 171, in evaluation_step
output = self.trainer.accelerator_backend.validation_step(args)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 76, in validation_step
output = self.__validation_step(args)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 86, in __validation_step
output = self.trainer.model.validation_step(*args)
File "examples/rag/finetune.py", line 240, in validation_step
return self._generative_step(batch)
File "examples/rag/finetune.py", line 280, in _generative_step
max_length=self.target_lens["val"],
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/modeling_rag.py", line 873, in generate
return_tensors="pt",
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 600, in __call__
retrieved_doc_embeds, doc_ids, docs = self.retrieve(question_hidden_states, n_docs)
File "/home/ioannis/Desktop/transformers-lhoestq-2/transformers/examples/rag/distributed_retriever.py", line 115, in retrieve
doc_ids, retrieved_doc_embeds = self._main_retrieve(question_hidden_states, n_docs)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 521, in _main_retrieve
ids, vectors = self.index.get_top_docs(question_hidden_states, n_docs)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py", line 226, in get_top_docs
_, ids = self.dataset.search_batch("embeddings", question_hidden_states, n_docs)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/datasets/search.py", line 607, in search_batch
self._check_index_is_initialized(index_name)
File "/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/datasets/search.py", line 358, in _check_index_is_initialized
f"Index with index_name '{index_name}' not initialized yet. Please make sure that you call `add_faiss_index` or `add_elasticsearch_index` first."
datasets.search.MissingIndex: Index with index_name 'embeddings' not initialized yet. Please make sure that you call `add_faiss_index` or `add_elasticsearch_index` first.
```
<|||||>> I was able to get passed this error by using my own RagRetriever instead of RagPyTorchDistributedRetriever inside "transformers/examples/rag/finetune.py"
>
> I am also using my own custom dataset and index #7763
>
> The following changes got me past the missing index error. However, I have no idea if this is efficient or if I am doing something that I shouldn't be doing...
>
> ```
> if self.is_rag_model:
> if args.prefix is not None:
> config.generator.prefix = args.prefix
> config.label_smoothing = hparams.label_smoothing
> hparams, config.generator = set_extra_model_params(extra_model_params, hparams, config.generator)
>
> # commented out this line
> # retriever = RagPyTorchDistributedRetriever.from_pretrained(hparams.model_name_or_path, config=config)
>
> ############### new stuff ###############
> dataset = load_from_disk(args.passages_path) # to reload the dataset
> dataset.load_faiss_index("embeddings", args.index_path) # to reload the index
> retriever = RagRetriever.from_pretrained(
> hparams.model_name_or_path, index_name="custom", indexed_dataset=dataset
> )
> ######################################
>
> model = self.model_class.from_pretrained(hparams.model_name_or_path, config=config, retriever=retriever)
> prefix = config.question_encoder.prefix
> ```
The above code seems to work (runs out of GPU memory in my local machine, so I am in the process of testing it on a server - will keep you posted).
I noticed that the retrieval step 4 in _/examples/rag/use_own_knowledge_dataset.py_ takes a few minutes for every question, so I tried passing in _device=0_ to faiss to move it from cpu to gpu. I got this:
`Faiss assertion 'blasStatus == CUBLAS_STATUS_SUCCESS' failed in virtual void faiss::gpu::StandardGpuResources::initializeForDevice(int) at gpu/StandardGpuResources.cpp:248`
The idea was to speed it up because I don't see how the finetuning can take place with such a slow index, but I might have misunderstood.<|||||>Seems like my attempt to replace RagPyTorchDistributedRetriever with RagRetriever (in an 8 GPU machine) fails too. Too good to be true :)
```
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/8ade9cf561f8c0a47d1c3785e850c57414d776b3795e21bd01e58483399d2de4.11f57497ee659e26f830788489816dbcb678d91ae48c06c50c9dc0e4438ec05b
Model name 'facebook/rag-sequence-base/generator_tokenizer' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). Assuming 'facebook/rag-sequence-base/generator_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files.
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/vocab.json from cache at /home/ubuntu/.cache/torch/transformers/3b9637b6eab4a48cf2bc596e5992aebb74de6e32c9ee660a27366a63a8020557.6a4061e8fc00057d21d80413635a86fdcf55b6e7594ad9e25257d2f99a02f4be
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/merges.txt from cache at /home/ubuntu/.cache/torch/transformers/b2a6adcb3b8a4c39e056d80a133951b99a56010158602cf85dee775936690c6a.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/added_tokens.json from cache at None
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/special_tokens_map.json from cache at /home/ubuntu/.cache/torch/transformers/342599872fb2f45f954699d3c67790c33b574cc552a4b433fedddc97e6a3c58e.6e217123a3ada61145de1f20b1443a1ec9aac93492a4bd1ce6a695935f0fd97a
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/e5f72dc4c0b1ba585d7afb7fa5e3e52ff0e1f101e49572e2caaf38fab070d4d6.d596a549211eb890d3bb341f3a03307b199bc2d5ed81b3451618cbcb04d1f1bc
loading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer.json from cache at None
Using native 16bit precision.
INFO:lightning:Using native 16bit precision.
INFO:__main__:Custom init_ddp_connection.
initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8
INFO:lightning:initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8
Traceback (most recent call last):
File "/home/ubuntu/transformers/examples/rag/finetune.py", line 519, in <module>
main(args)
File "/home/ubuntu/transformers/examples/rag/finetune.py", line 491, in main
logger=logger,
File "/home/ubuntu/transformers/examples/lightning_base.py", line 384, in generic_train
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1046, in fit
self.accelerator_backend.train(model)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 57, in train
self.ddp_train(process_idx=self.task_idx, mp_queue=None, model=model)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 164, in ddp_train
self.trainer.is_slurm_managing_tasks
File "/home/ubuntu/transformers/examples/rag/finetune.py", line 180, in init_ddp_connection
self.model.retriever.init_retrieval(self.distributed_port)
TypeError: init_retrieval() takes 1 positional argument but 2 were given
Traceback (most recent call last):
File "examples/rag/finetune.py", line 519, in <module>
main(args)
File "examples/rag/finetune.py", line 491, in main
logger=logger,
File "/home/ubuntu/transformers/examples/lightning_base.py", line 384, in generic_train
trainer.fit(model)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
result = fn(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in fit
results = self.accelerator_backend.spawn_ddp_children(model)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 123, in spawn_ddp_children
results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)
File "/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 164, in ddp_train
self.trainer.is_slurm_managing_tasks
File "examples/rag/finetune.py", line 180, in init_ddp_connection
self.model.retriever.init_retrieval(self.distributed_port)
TypeError: init_retrieval() takes 1 positional argument but 2 were given
```
<|||||>There are differences between the regular retriever and the distributed retriever:
- the distributed retriever's index is only initialized on rank 0
- the different processes communicate with rank 0 to do retrieval
- the init_retrieval method requires to specify a port on which retrieval communication happen
Let me know if you figure out a way to make it work in your case<|||||>Will do, though I guess it's easier to go back to trying to make it work with _RagPyTorchDistributedRetriever_.
Tried adding _dataset.load_faiss_index_ inside get_dataset in finetune.py, but... 'Seq2SeqDataset' object has no attribute 'load_faiss_index'
<|||||>The Seq2SeqDataset is the one the model is trained *on*. The knowledge dataset is stored inside the retriever.
The `MissingIndex` error must come from init_retrieval not being called on the retriever in the process 0, or that the index is not properly loaded.<|||||>Hi @lhoestq @patrickvonplaten any update on this? I'm also running into this issue when running finetune.sh. Though I am able to get the legacy index to work.<|||||>@amogkam I also get the same error when trying to run fine-tuning. I also got an error saying self.opt is not there, but I did solve it.
What do you mean by legacy index?<|||||>I'll investigate this error this week. I'll let you know how it goes<|||||>@lhoestq
I actually did change the initialization in [this line (retrieval_rag.py)](https://github.com/huggingface/transformers/blob/master/src/transformers/retrieval_rag.py#L276).
self.dataset_name, with_index=True,index_name=exact, split=self.dataset_split, dummy=self.use_dummy_dataset<|||||>That's good to know thanks !
However for the RagPyTorchDistributedRetriever we need to load the index only on the process 0 and keep `with_index=False` for the other processes. Ideally we have `with_index=False` in the `__init__` and `with_index=True` in `init_index`<|||||>Oh get it!
On Tue, Nov 10, 2020, 04:56 Quentin Lhoest <[email protected]> wrote:
> That's good to know thanks !
> However for the RagPyTorchDistributedRetriever we need to load the index
> only on the process 0 and keep with_index=False for the other processes.
> Ideally we have with_index=False in the __init__ and with_index=True in
> init_index
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724102348>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGQAIXISSTMTQW6L6KDSPAGKLANCNFSM4SSCPP5Q>
> .
>
<|||||>Sorry for spamming. I find it hard to understand index_name and index_paths
when loading the datasets with fairsis
On Tue, Nov 10, 2020, 04:56 Quentin Lhoest <[email protected]> wrote:
> That's good to know thanks !
> However for the RagPyTorchDistributedRetriever we need to load the index
> only on the process 0 and keep with_index=False for the other processes.
> Ideally we have with_index=False in the __init__ and with_index=True in
> init_index
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724102348>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGQAIXISSTMTQW6L6KDSPAGKLANCNFSM4SSCPP5Q>
> .
>
<|||||>You can specify index_name if you want to use one the index that comes with the dataset (exact/compressed), OR you can use index_path to use your own local index file.<|||||>So the index name is like a column right ? Which controls whether thah
column should get loaded in to memory or not ?
On Wed, Nov 11, 2020, 02:35 Quentin Lhoest <[email protected]> wrote:
> You can specify index_name if you want to use one the index that comes
> with the dataset (exact/compressed), OR you can use index_path to use your
> own local index file.
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724705036>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGUF5V5GO2KCEGIBH7TSPE6RHANCNFSM4SSCPP5Q>
> .
>
<|||||>In the RAG configuration you can specify index_name="exact" or index_name="compressed" for the "wiki_dpr" dataset. Wiki_dpr has indeed those two types of index. For more info you can check the docs of the [RagConfig](https://huggingface.co/transformers/model_doc/rag.html#ragconfig)
On the other hand in the datasets library and in particular in `Dataset.add_faiss_index` you can also see an "index_name" parameter. However this one is different from the one used in the RAG configuration on transformers side. In the datasets library, each dataset can have several indexes that are identified by their names, and by default their names correspond to the column that was used to build the index. See the docs of [the add_faiss_index method](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_faiss_index)
This is unfortunately the same variable name but not for the same purpose...
Does that make sense to you ?<|||||>Thanks a lot. I got the idea.
@lhoestq
*Btw I tried to run the rag fine-tuning script
with a lower PyTorch lightning (0.9) version and it worked. I think the issue comes
with version miss-match.*
On Wed, Nov 11, 2020, 02:46 Quentin Lhoest <[email protected]> wrote:
> In the RAG configuration you can specify index_name="exact" or
> index_name="compressed" for the "wiki_dpr" dataset. Wiki_dpr has indeed
> those two types of index. For more info you can check the docs of the
> RagConfig
> <https://huggingface.co/transformers/model_doc/rag.html#ragconfig>
>
> On the other hand in the datasets library and in particular in
> Dataset.add_faiss_index you can also see an "index_name" parameter.
> However this one is different from the one used in the RAG configuration on
> transformers side. In the datasets library, each dataset can have several
> indexes that are identified by their names, and by default their names
> correspond to the column that was used to build the index. See the docs of the
> add_faiss_index method
> <https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_faiss_index>
>
> This is unfortunately the same variable name but not for the same
> purpose...
> Does that make sense to you ?
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724711118>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGUB6XY3LU3EBGEV2D3SPE7Z3ANCNFSM4SSCPP5Q>
> .
>
<|||||>I managed to reproduce the issue, I'm working on a fix<|||||>Perfect.
On Fri, Nov 13, 2020, 00:12 Quentin Lhoest <[email protected]> wrote:
> I managed to reproduce the issue, I'm working on a fix
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/7816#issuecomment-726012277>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGUUQ6UV4KCAZGRLCWTSPO7LDANCNFSM4SSCPP5Q>
> .
>
<|||||>Update: it looks like it's because pytorch lightning removed the init_ddp_connection hook of their LightningModule.
The hook was used to initialize the index on process 0.
I'll use something else to initialize the index.<|||||>Ok, that's why the code still works with PL 0.9.
So now the problem is the initialization of the index in this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/retrieval_rag.py#L274) ?
Thanks a lot.<|||||>@lhoestq any update with this, please?
p.s sorry for spamming :) <|||||>Yes I'm working on a fix ! I'll make a PR tomorrow<|||||>Thanks a lot. :) |
transformers | 7,815 | closed | "Cannot re-initialize CUDA in forked subprocess." error when running "transformers/notebooks/05-benchmark.ipynb" notebook | ## Environment info
I am getting this error on a server, but also on Collab, so giving the Collab specs:
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No?
And on my server:
- `transformers` version: 3.3.1
- Platform: Linux-4.19.0-11-amd64-x86_64-with-debian-10.6
- Python version: 3.6.12
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No?
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Any model.
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. [Run the colab on benchmarking provided on the transformers GitHub](https://github.com/huggingface/transformers/blob/master/notebooks/05-benchmark.ipynb).
```
2020-10-15 14:20:38.078717: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
1 / 5
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Traceback (most recent call last):
File "run_benchmark.py", line 47, in <module>
main()
File "run_benchmark.py", line 43, in main
benchmark.run()
File "/usr/local/lib/python3.6/dist-packages/transformers/benchmark/benchmark_utils.py", line 674, in run
memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length)
ValueError: too many values to unpack (expected 2)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-15-2020 14:22:24 | 10-15-2020 14:22:24 | Might be of interest to @patrickvonplaten <|||||>Facing same issue here.<|||||>Try this.
You can check other parameters for PyTorchBenchmarkArguments at [here](https://github.com/huggingface/transformers/blob/acf56408d81fdc04a32af139a45ae7b76e0c5b0d/src/transformers/benchmark/benchmark_args_utils.py#L34-L123).
```python
# main.py
def main():
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments
args = PyTorchBenchmarkArguments(models=["bert-base-uncased"],
batch_sizes=[8],
sequence_lengths=[8, 32, 128, 512],
multi_process=False)
print(args.do_multi_processing)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
print(results)
if __name__ == '__main__':
main()
```
```
CUDA_VISIBLE_DEVICES=0 python main.py
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 7,814 | closed | BART/TFBart: allow decoder_input_ids.shape[-1] > 1 + use_cache = True | There is an edge case fixed in https://github.com/huggingface/transformers/pull/4581
Try to apply the same logic to bart/move input_id slicing to prepare_inputs_for_generation | 10-15-2020 14:21:58 | 10-15-2020 14:21:58 | |
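A rough sketch of the kind of slicing meant here, written in the style of a `prepare_inputs_for_generation` override (illustrative only; argument and key names are assumptions, not the actual Bart code):
```python
import torch

def prepare_inputs_for_generation(decoder_input_ids: torch.Tensor, past=None, **kwargs):
    # with cached past key/values only the last generated token has to be fed
    # to the decoder, so the slicing can live here instead of inside the model
    if past is not None:
        decoder_input_ids = decoder_input_ids[:, -1:]
    return {"decoder_input_ids": decoder_input_ids, "past": past, **kwargs}
```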
transformers | 7,813 | closed | Small fixes to NotebookProgressCallback | # What does this PR do?
Fixes a few things in the `NotebookProgressCallback`, mainly:
- triggering forced updates after an evaluation (as an evaluation is longer than just a training step)
- fixing the behavior when an evaluation strategy is not epochs
- removing empty except in the `is_in_notebook` test
| 10-15-2020 14:13:20 | 10-15-2020 14:13:20 | |
transformers | 7,812 | closed | the results of run_squad.py is terrible | # ❓ the results of run_squad.py is terrible
## I ran run_squad.py and the results are as follows:
`Results: {'exact': 40.25098964036048, 'f1': 44.224848486777795, 'total': 11873, 'HasAns_exact': 80.61740890688259, 'HasAns_f1': 88.57652261867625, 'HasAns_total': 5928, 'NoAns_exact': 0.0, 'NoAns_f1': 0.0, 'NoAns_total': 5945, 'best_exact': 50.11370336056599, 'best_exact_thresh': 0.0, 'best_f1': 50.11370336056599, 'best_f1_thresh': 0.0}`
_**`exact`,`f1` and `HasAns_exact`,`HasAns_f1` are very different!**_
## The startup args are as follows:
export SQUAD_DIR=./datasets/squad
python run_squad.py \
--squad_model bert_qa_model_squad \
--model_type bert \
--model_name_or_path bert-base-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./models/debug_squad/
| 10-15-2020 13:31:29 | 10-15-2020 13:31:29 | Maybe I not use the argument `--version_2_with_negative` while training on Squad V2.0 ?<|||||>Indeed, not using that clarg would result in such bad results as you're not taking into account impossible answers. Can you let us know if you obtain better results with `--version_2_with_negative`?<|||||>> Indeed, not using that clarg would result in such bad results as you're not taking into account impossible answers. Can you let us know if you obtain better results with `--version_2_with_negative`?
After using `--version_2_with_negative`, the result is also not satisfactory.
Both exact and f1 are not exceed **80%**.
The results are as follows:
{"exact_20000": 73.88191695443443, "f1_20000": 76.9165287314905, "total_20000": 11873, "HasAns_exact_20000": 68.60661268556005, "HasAns_f1_20000": 74.68453873633385, "HasAns_total_20000": 5928, "NoAns_exact_20000": 79.1421362489487, "NoAns_f1_20000": 79.1421362489487, "NoAns_total_20000": 5945, "best_exact_20000": 73.88191695443443, "best_exact_thresh_20000": 0.0, "best_f1_20000": 76.91652873149043,}
After the 25000th steps of training, the performance got worse.
<|||||>That's already a lot better! What is this clarg `--squad_model bert_qa_model_squad`? It's not in the `run_squad.py` script?<|||||>
> That's already a lot better! What is this clarg `--squad_model bert_qa_model_squad`? It's not in the `run_squad.py` script?
Oh ,I have adjusted some code from `run_squad.py` so that it could load and train my custom model from clarg.
And `bert_qa_model_squad` is the model whose code is copied from `BertForQuestionAnswering` in `transformers`.
So that I can design my custom model and then transmit it to the `run_squad.py` script.<|||||>Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? Have you tried running the standard command on the `transformers` script and model?
```
python run_squad.py
--model_name_or_path bert-base-uncased
--do_train
--do_eval
--do_lower_case
--train_file $SQUAD_DIR/train-v2.0.json
--predict_file $SQUAD_DIR/dev-v2.0.json
--version_2_with_negative
--per_gpu_train_batch_size 12
--learning_rate 3e-5
--num_train_epochs 2.0
--max_seq_length 384
--doc_stride 128
--output_dir ./models/debug_squad/
```<|||||>> Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? Have you tried running the standard command on the `transformers` script and model?
>
> ```
> python run_squad.py
> --model_name_or_path bert-base-uncased
> --do_train
> --do_eval
> --do_lower_case
> --train_file $SQUAD_DIR/train-v2.0.json
> --predict_file $SQUAD_DIR/dev-v2.0.json
> --version_2_with_negative
> --per_gpu_train_batch_size 12
> --learning_rate 3e-5
> --num_train_epochs 2.0
> --max_seq_length 384
> --doc_stride 128
> --output_dir ./models/debug_squad/
> ```
sorry, I will try later and tell you the results.<|||||>> Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? Have you tried running the standard command on the `transformers` script and model?
>
> ```
> python run_squad.py
> --model_name_or_path bert-base-uncased
> --do_train
> --do_eval
> --do_lower_case
> --train_file $SQUAD_DIR/train-v2.0.json
> --predict_file $SQUAD_DIR/dev-v2.0.json
> --version_2_with_negative
> --per_gpu_train_batch_size 12
> --learning_rate 3e-5
> --num_train_epochs 2.0
> --max_seq_length 384
> --doc_stride 128
> --output_dir ./models/debug_squad/
> ```
I have run the official `run_squad.py` script, and the result is as follows:
10/16/2020 19:34:00 - INFO - main - Results: {'exact': 72.60170133917292, 'f1': 75.79599520268259, 'total': 11873, 'HasAns_exact': 72.2165991902834, 'HasAns_f1': 78.61434734167507, 'HasAns_total': 5928, 'NoAns_exact': 72.9857022708158, 'NoAns_f1': 72.9857022708158, 'NoAns_total': 5945, 'best_exact': 72.60170133917292, 'best_exact_thresh': 0.0, 'best_f1': 75.79599520268256, 'best_f1_thresh': 0.0}
<|||||>I have run the official run_squad.py script, but only got a 'cached_train_bert-base-uncased_384' cache file, not a result. Did I do something wrong?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,811 | closed | Fix issue #7781 | `decoder_config` may not be defined at this line, use `kwargs_decoder["config"]` instead. This is also consistent with the rest of the function.
Fixes # 7781
https://github.com/huggingface/transformers/issues/7781
## Who can review?
Anybody can review, though @patrickvonplaten gave an informal approval in the issue discussion. | 10-15-2020 12:43:28 | 10-15-2020 12:43:28 | Hey @jsilter - sorry this fix was already pushed yesterday in #7903. Thanks for spotting the bug!!! |
transformers | 7,810 | closed | Metrics calculate error: can only calculate the mean of floating types. Got Bool instead | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 GPU
- Tensorflow version (GPU?): -
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Someone working on the examples/GLUE benchmarks
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (GLUE
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run QNLI experiment
Error:
```bash
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/transformers/data/metrics/__init__.py", line 73, in glue_compute_metrics
return {"acc": simple_accuracy(preds, labels)}
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/transformers/data/metrics/__init__.py", line 34, in simple_accuracy
return (preds == labels).mean()
RuntimeError: Can only calculate the mean of floating types. Got Bool instead.
Exception ignored in: <function tqdm.__del__ at 0x7f8d1d09ff28>
Traceback (most recent call last):
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1086, in __del__
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1293, in close
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1471, in display
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1089, in __repr__
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: cannot unpack non-iterable NoneType object
```
A simple fix is to modify `simple_accuracy` in `transformers/data/metrics/__init__.py` (line 34) to:
```python
return (preds == labels).to(dtype=torch.float32).mean()
```
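A slightly more defensive variant of the same fix, sketched so it works whether `preds`/`labels` arrive as NumPy arrays or as PyTorch tensors (this is a sketch, not the exact upstream patch):
```python
import numpy as np
import torch

def simple_accuracy(preds, labels):
    # convert tensors to NumPy first so .mean() always operates on a float-compatible array
    if isinstance(preds, torch.Tensor):
        preds = preds.detach().cpu().numpy()
    if isinstance(labels, torch.Tensor):
        labels = labels.detach().cpu().numpy()
    return (np.asarray(preds) == np.asarray(labels)).mean()
```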
## Expected behavior
Metric should be computed without problems | 10-15-2020 12:36:03 | 10-15-2020 12:36:03 | Great, this looks like the correct fix! Do you want to open a PR?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 7,809 | closed | [Seq2Seq] Allow EncoderDecoderModels to be trained with Seq2Seq | # What does this PR do?
This PR changes the Seq2Seq Trainer a bit to:
1) Make it work with `EncoderDecoder`
2) Align its API more with the general `Trainer`
@sshleifer @patil-suraj @sgugger - it would be great if you could take a look and give your general opinion on it :-)
If this would be ok for you, I will fix the examples test. | 10-15-2020 12:35:46 | 10-15-2020 12:35:46 | LGTM, thanks for aligning it! We just need some way to pass `eval_beams` and `max_gen_length`.
>We can have a `Seq2SeqTrainingArguments` that subclasses `TrainingArguments` if that helps.
@sgugger we do have `Seq2SeqTrainingArguments` class
https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/examples/seq2seq/finetune_trainer.py#L37<|||||>> @sgugger we do have `Seq2SeqTrainingArguments` class
Ah, had forgotten about that :-)<|||||>`eval_beams`/`eval_max_gen_length` reasoning:
@patil-suraj said exactly this LOL, but in my words:
users are not good at modifying configs locally. We want to have a way to run `num_beams=2` during the generation step, but then end up with a trained model with the default # beams. In general, we try not to manipulate config attributes that would only be desired during training.
<|||||>Also would <3 an encoder decoder test in `examples/seq2seq/test_finetune_trainer.py`.
<|||||>After discussion @sshleifer - changed the `Seq2SeqTrainer` to be fully backwards compatible and to work with EncoderDecoder.
@sshleifer - cannot add EncDec test yet because the complete command line setup is too constrained (requires `prepare_seq2seq_batch` to be defined for all tokenizers, etc...) => will see how to add this in the future.
@sshleifer , @patil-suraj - could you do another review please? :-) <|||||>Should be good - I don't really see how -100 would slow down the TPU, but let's wait for @LysandreJik opinion here. <|||||>Can't seem to reply to the comment, but yes, the line @sshleifer is pointing at will slow down on TPU since it's probably using a `torch.where` behind the scene which does not have an XLA operation AFAIK.<|||||>> Can't seem to reply to the comment, but yes, the line @sshleifer is pointing at will slow down on TPU since it's probably using a `torch.where` behind the scene which does not have an XLA operation AFAIK.
Okay, I see -> let's move back to the old CE loss function then to keep backward compatibility!
@sshleifer - one last review please :-) |
transformers | 7,808 | closed | Pipeline(summarization): CUDA error: an illegal memory access was encountered | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Ubuntu 18
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0 (YES)
- Tensorflow version (GPU?): 2.2.0 (YES)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): BART for Summarization (pipeline)
The problem arises when using:
```python
from tqdm import tqdm
from transformers import pipeline


class Summarizer:
def __init__(self, min_length: int = 10, max_length: int = 120, device=1):
self.summarizer = pipeline("summarization", device=device)
self.min_length=min_length
self.max_length=max_length
def summarize(self, articles_df):
empty_abstract = articles_df[articles_df["abstract"] == ""]
for idx, paper in tqdm(empty_abstract.iterrows(), desc="Iterating over papers with empty abstract"):
articles_df.loc[idx, "abstract"] = self._summarize_one_paper(paper)
return articles_df
def _summarize_one_paper(self, paper):
try:
summaries = self.summarizer(paper["paragraphs"], min_length=self.min_length, max_length=self.max_length)
except:
summaries = []
for paragraph in paper["paragraphs"]:
summaries.extend(
self.summarizer(paragraph, min_length=self.min_length, max_length=self.max_length)
)
return ["".join([summary["summary_text"] for summary in summaries])]
summarizer = Summarizer()
data_summarized = summarizer.summarize(data)
```
The tasks I am working on is:
* Summarization
The error arises after 848 iterations, which is what surprises me the most... It can access CUDA up until that point.
```
RuntimeError Traceback (most recent call last)
<ipython-input-5-098ee40477ee> in <module>
----> 1 data_summarized = summarizer.summarize(data)
<ipython-input-3-671b9c11b7bf> in summarize(self, articles_df)
8 empty_abstract = articles_df[articles_df["abstract"] == ""]
9 for idx, paper in tqdm(empty_abstract.iterrows(), desc="Iterating over papers with empty abstract"):
---> 10 articles_df.loc[idx, "abstract"] = self._summarize_one_paper(paper)
11 return articles_df
12
<ipython-input-3-671b9c11b7bf> in _summarize_one_paper(self, paper)
18 for paragraph in paper["paragraphs"]:
19 summaries.extend(
---> 20 self.summarizer(paragraph, min_length=self.min_length, max_length=self.max_length)
21 )
22 return ["".join([summary["summary_text"] for summary in summaries])]
~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs)
1926
1927 if self.framework == "pt":
-> 1928 inputs = self.ensure_tensor_on_device(**inputs)
1929 input_length = inputs["input_ids"].shape[-1]
1930 elif self.framework == "tf":
~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in ensure_tensor_on_device(self, **inputs)
601 :obj:`Dict[str, torch.Tensor]`: The same as :obj:`inputs` but on the proper device.
602 """
--> 603 return {name: tensor.to(self.device) for name, tensor in inputs.items()}
604
605 def check_model_type(self, supported_models: Union[List[str], dict]):
~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in <dictcomp>(.0)
601 :obj:`Dict[str, torch.Tensor]`: The same as :obj:`inputs` but on the proper device.
602 """
--> 603 return {name: tensor.to(self.device) for name, tensor in inputs.items()}
604
605 def check_model_type(self, supported_models: Union[List[str], dict]):
RuntimeError: CUDA error: an illegal memory access was encountered
```
| 10-15-2020 12:28:20 | 10-15-2020 12:28:20 | I would guess that that traceback is either masking OOM or IndexError. You will get a better traceback if you try on CPU. |
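A quick way to follow that suggestion (a sketch, not the poster's exact setup): run the same inputs on CPU, or make CUDA report the failing kernel synchronously.
```python
import os

# Must be set before CUDA is initialised; makes the stack trace point at the real op.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

from transformers import pipeline

# device=-1 keeps the model on CPU, which usually surfaces the underlying
# OOM / IndexError instead of "illegal memory access".
summarizer_cpu = pipeline("summarization", device=-1)
print(summarizer_cpu("Some long paragraph ...", min_length=10, max_length=120))
```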
transformers | 7,807 | closed | ValueError: too many values to unpack (expected 4) | Hello,
I am trying to feed the hidden output of the embedding layer of the `LongformerForMultipleChoice` model directly into the m-th layer of the same model. Each of my multiple-choice questions has 4 options. I am trying to tweak the HuggingFace code for Longformer to carry out my task, but I am rather perplexed by the ValueError (see below) that I am getting. I know that this can be cumbersome to answer, but could you please help me?
When I do:
```Python
my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(
    hidden_output, attention_mask=my_attention_mask, output_attentions=False
)
```
, this ValueError is generated:
```Python
File "<ipython-input-67-f93c8b17889e>", line 1, in <module>
best_model_longformer.longformer.encoder.layer[0].forward(outputs,attention_mask=attention_mask,output_attentions=True)
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 718, in forward
output_attentions=output_attentions,
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 693, in forward
output_attentions,
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 192, in forward
float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 384, in _sliding_chunks_query_key_matmul
batch_size, seq_len, num_heads, head_dim = query.size()
ValueError: too many values to unpack (expected 4)
```
In other words, the ValueError is generated here:
https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L279
In my code above, `my_attention_mask` is the same attention mask that I would specify under the regular `LongformerForMultipleChoice` command. `my_attention_mask` was generated by:
```Python
# I am using the LongformerForMultipleChoice model, where each multiple choice question has 4 options.
encoded_dict = longformer_tokenizer(question_list, option_list,
return_tensors = 'pt',
padding ='max_length')
my_attention_mask = {k: v.unsqueeze(0) for k,v in encoded_dict.items()}['attention_mask']
my_attention_mask
>>> tensor([[[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]]])
# I can use this my_attention_mask in the regular command without an error, as below:
longformer_output= my_Longformer_multiple_choice_model(input_ids=input_ids,....,attention_mask=my_attention_mask)
```
Also, the `hidden_output` in used in my code above was generated by the following:
```Python
encoded_dict = longformer_tokenizer(question_list, option_list,
return_tensors = 'pt',
padding ='max_length')
hidden_output = my_Longformer_multiple_choice_model(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels)[2][0][:,:,:]
hidden_output.size()
>>> torch.Size([4, 4096, 768])
```
Since the layers of Transformer models in general are designed to take in the hidden output of the previous layer as their input, I am sure that the HuggingFace code for `LongformerForMultipleChoice` also somehow allows each layer to take the hidden vectors as their input. This is why I think what I am trying to do (feeding hidden output of embedding layer as an input to the m-th layer) is completely achievable....I do not get why I am getting the ValueError.
I am suspecting the form of `my_attention_mask` is causing the ValueError. Is there any way that I can avoid this ValueError without re-writing the whole function? What exactly is the `attention_mask` that is used as a parameter for the following function?:
https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L225
What should I pass for the `attention_mask` parameter in the command `my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask, output_attentions=False)`?
PS: The following seemed to suggest that I need to pass in the `extended_attention_mask` rather than the `attention_mask` itself:
https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L1262
So I tried `my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, extended_attention_mask, output_attentions=False)`, but I am still getting the same ValueError....
Please help.
Thank you, | 10-15-2020 12:14:26 | 10-15-2020 12:14:26 | I solved the issue by myself<|||||>> I solved the issue by myself
How?<|||||>> I solved the issue by myself
@h56cho Would you mind sharing it? Thanks |
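Since the original poster never shared their fix, the following is only a plausible sketch for readers hitting the same ValueError: the Longformer layer appears to expect a 2D mask of shape `(batch, seq_len)` derived from `get_extended_attention_mask`, with the 4 answer options flattened into the batch, rather than the raw `(1, 4, seq_len)` mask.
```python
# Hypothetical fix, not confirmed by the original poster.
model = my_Longformer_multiple_choice_model        # assumed LongformerForMultipleChoice

flat_mask = my_attention_mask.view(-1, my_attention_mask.size(-1))   # (4, 4096)
extended_mask = model.longformer.get_extended_attention_mask(
    flat_mask, flat_mask.shape, flat_mask.device
)[:, 0, 0, :]                                                        # (4, 4096), values 0 / -10000

layer_out = model.longformer.encoder.layer[layer_index](
    hidden_output, attention_mask=extended_mask, output_attentions=False
)
```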
transformers | 7,806 | closed | Import error when fine-tuning mbart from master branch | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
Translation: @sshleifer
Bart: @sshleifer
-->
@mfuntowicz @sshleifer
## Information
Model I am using (mBART):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
My script involves fine-tuning mBART for multilingual translation. The problem arises when importing transformers from the master branch.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
IITB Hindi-english dataset
## To reproduce
Steps to reproduce the behavior:
1. `import transformers` from master branch
```
from transformers import (
File "/content/drive/My Drive/hin-eng/transformers/__init__.py", line 68, in <module>
from .data import (
File "/content/drive/My Drive/hin-eng/transformers/data/__init__.py", line 6, in <module>
from .processors import (
File "/content/drive/My Drive/hin-eng/transformers/data/processors/__init__.py", line 6, in <module>
from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features
File "/content/drive/My Drive/hin-eng/transformers/data/processors/squad.py", line 10, in <module>
from ...tokenization_bart import BartTokenizer
File "/content/drive/My Drive/hin-eng/transformers/tokenization_bart.py", line 18, in <module>
from .tokenization_roberta import RobertaTokenizer, RobertaTokenizerFast
File "/content/drive/My Drive/hin-eng/transformers/tokenization_roberta.py", line 20, in <module>
from .tokenization_gpt2 import GPT2Tokenizer, GPT2TokenizerFast
File "/content/drive/My Drive/hin-eng/transformers/tokenization_gpt2.py", line 27, in <module>
from .tokenization_utils_fast import PreTrainedTokenizerFast
File "/content/drive/My Drive/hin-eng/transformers/tokenization_utils_fast.py", line 29, in <module>
from .convert_slow_tokenizer import convert_slow_tokenizer
File "/content/drive/My Drive/hin-eng/transformers/convert_slow_tokenizer.py", line 25, in <module>
from tokenizers.models import BPE, Unigram, WordPiece
ImportError: cannot import name 'Unigram'
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Importing transformers normally. | 10-15-2020 11:48:21 | 10-15-2020 11:48:21 | Could you fill-in the issue template for bugs so that we may help you?<|||||>Please checkout now.<|||||>you need to upgrade tokenizers.
It should happen for you if you run `pip install -e ".[dev]"` from the root of the repo.<|||||>Thanks, it worked.<|||||>> Thanks, it worked.
Can you tell me how you solved it? Thanks
<|||||>> > Thanks, it worked.
>
> Can you tell me how you solved it? Thanks
Just run the command @sshleifer suggested from the directory containing the setup.py file. <|||||>Hello,
Which directory are you talking about ? The transformer site package directory, the project directory ? |
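If someone else lands here with the same `ImportError`, a quick sanity check (just a sketch) is to confirm which `tokenizers` build is actually being picked up:
```python
import tokenizers

print(tokenizers.__version__)          # the Unigram model needs a recent build
print(tokenizers.__file__)             # shows which environment the package comes from

from tokenizers.models import Unigram  # raises ImportError on an outdated build
```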
transformers | 7,805 | closed | pip3 install issue | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1
- Platform: macOS
- Python version: 3.8.2
- PyTorch version (GPU?):-
- Tensorflow version (GPU?):-
- Using GPU in script?:-
- Using distributed or parallel set-up in script?:-
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Just do `pip3 install transformers` on a machine running macOS Catalina 10.15.7 with Xcode 12.0.1
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Alis-MacBook-Pro:ali_taha_dincer_2020 alitahadincer$ sudo pip3 install transformers
Password:
WARNING: The directory '/Users/alitahadincer/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting transformers
Downloading transformers-3.3.1-py3-none-any.whl (1.1 MB)
|████████████████████████████████| 1.1 MB 571 kB/s
Collecting tokenizers==0.8.1.rc2
Downloading tokenizers-0.8.1rc2-cp38-cp38-macosx_10_14_x86_64.whl (2.1 MB)
|████████████████████████████████| 2.1 MB 938 kB/s
Collecting regex!=2019.12.17
Downloading regex-2020.10.11.tar.gz (690 kB)
|████████████████████████████████| 690 kB 1.0 MB/s
Collecting sacremoses
Downloading sacremoses-0.0.43.tar.gz (883 kB)
|████████████████████████████████| 883 kB 1.2 MB/s
Requirement already satisfied: numpy in /Library/Python/3.8/site-packages (from transformers) (1.18.5)
Collecting sentencepiece!=0.1.92
Downloading sentencepiece-0.1.91-cp38-cp38-macosx_10_6_x86_64.whl (1.0 MB)
|████████████████████████████████| 1.0 MB 1.5 MB/s
Requirement already satisfied: requests in /Library/Python/3.8/site-packages (from transformers) (2.24.0)
Requirement already satisfied: tqdm>=4.27 in /Library/Python/3.8/site-packages (from transformers) (4.50.2)
Requirement already satisfied: filelock in /Library/Python/3.8/site-packages (from transformers) (3.0.12)
Collecting packaging
Downloading packaging-20.4-py2.py3-none-any.whl (37 kB)
Requirement already satisfied: six in /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages (from sacremoses->transformers) (1.15.0)
Requirement already satisfied: click in /Library/Python/3.8/site-packages (from sacremoses->transformers) (7.1.2)
Collecting joblib
Downloading joblib-0.17.0-py3-none-any.whl (301 kB)
|████████████████████████████████| 301 kB 1.4 MB/s
Requirement already satisfied: chardet<4,>=3.0.2 in /Library/Python/3.8/site-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /Library/Python/3.8/site-packages (from requests->transformers) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /Library/Python/3.8/site-packages (from requests->transformers) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Library/Python/3.8/site-packages (from requests->transformers) (1.25.10)
Collecting pyparsing>=2.0.2
Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
|████████████████████████████████| 67 kB 880 kB/s
Building wheels for collected packages: regex, sacremoses
Building wheel for regex (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/tmp/pip-wheel-461le_2e
cwd: /private/tmp/pip-install-j20cf16z/regex/
Complete output (114 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.14.6-x86_64-3.8
creating build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
running build_ext
building 'regex._regex' extension
creating build/temp.macosx-10.14.6-x86_64-3.8
creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3
xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11:
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:63:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture
#error Unsupported architecture
^
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11:
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:64:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported
#error architecture not supported
^
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:33:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported
#error architecture not supported
^
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t'
typedef __int64_t __darwin_blkcnt_t; /* total blocks */
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_blksize_t; /* preferred block size */
^
note: '__int128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_dev_t; /* dev_t */
^
note: '__int128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */
^
note: '__uint128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/
^
note: '__uint128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t'
typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t'
typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'?
typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */
^
note: '__uint128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t'
typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */
^
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */
^
note: '__int128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_sigset_t; /* [???] signal set */
^
note: '__uint128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'?
typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */
^
note: '__int128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_uid_t; /* [???] user IDs */
^
note: '__uint128_t' declared here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */
^
note: '__uint128_t' declared here
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'?
typedef __uint32_t __darwin_wctype_t;
^
note: '__uint128_t' declared here
In file included from regex_3/_regex.c:48:
In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:75:
In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types/_va_list.h:31:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/types.h:37:2: error: architecture not supported
#error architecture not supported
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
error: command 'xcrun' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for regex
Running setup.py clean for regex
Building wheel for sacremoses (setup.py) ... done
Created wheel for sacremoses: filename=sacremoses-0.0.43-py3-none-any.whl size=893259 sha256=b122e1b7fed3e4255e25cd3028672a60cb443062e9136993553b8ed40d2fd193
Stored in directory: /private/tmp/pip-ephem-wheel-cache-pguj75nn/wheels/7b/78/f4/27d43a65043e1b75dbddaa421b573eddc67e712be4b1c80677
Successfully built sacremoses
Failed to build regex
Installing collected packages: tokenizers, regex, joblib, sacremoses, sentencepiece, pyparsing, packaging, transformers
Running setup.py install for regex ... error
ERROR: Command errored out with exit status 1:
command: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-cgmi3mph/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Python/3.8/include/regex
cwd: /private/tmp/pip-install-j20cf16z/regex/
Complete output (114 lines):
running install
running build
running build_py
creating build
creating build/lib.macosx-10.14.6-x86_64-3.8
creating build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex
running build_ext
building 'regex._regex' extension
creating build/temp.macosx-10.14.6-x86_64-3.8
creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3
xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o
    [... identical clang errors to the bdist_wheel attempt above ...]
error: command 'xcrun' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-cgmi3mph/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Python/3.8/include/regex Check the logs for full command output.
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-15-2020 10:14:10 | 10-15-2020 10:14:10 | This seems to be an error when installing `regex`, can you install it on its own? If you cannot, please open an issue on [their repo](https://bitbucket.org/mrabarnett/mrab-regex/issues?status=new&status=open).<|||||>Thank you, I was not aware of that even I looked into trace twice. Closing the issue. |
transformers | 7,804 | closed | a | a | 10-15-2020 09:07:52 | 10-15-2020 09:07:52 | |
transformers | 7,803 | closed | TPU pod training with BERT | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:3.3.1
- Platform: TPU
- Python version:
- PyTorch version (GPU?): 1.6 TPU v3-32
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: Yes
### Who can help
## Information
Model I am using (Bert):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [] my own modified scripts: (give details below)
https://huggingface.co/blog/how-to-train
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
After starting training on a TPU Pod using PyTorch XLA, I get the following long message:
```
2020-10-14 14:14:33 10.164.0.60 [2] Traceback (most recent call last):
2020-10-14 14:14:33 10.164.0.60 [2] File "/home/developer/XainBERT/train.py", line 111, in <module>
2020-10-14 14:14:33 10.164.0.60 [2] prediction_loss_only=True,
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/trainer.py", line 248, in __init__
2020-10-14 14:14:33 10.164.0.60 [2] self.model = model.to(args.device) if model is not None else None
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 944, in wrapper
2020-10-14 14:14:33 10.164.0.60 [2] return func(*args, **kwargs)
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/training_args.py", line 408, in device
2020-10-14 14:14:33 10.164.0.60 [2] return self._setup_devices[0]
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 934, in __get__
2020-10-14 14:14:33 10.164.0.60 [2] cached = self.fget(obj)
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 944, in wrapper
2020-10-14 14:14:33 10.164.0.60 [2] return func(*args, **kwargs)
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/training_args.py", line 379, in _setup_devices
2020-10-14 14:14:33 10.164.0.60 [2] device = xm.xla_device()
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 167, in xla_device
2020-10-14 14:14:33 10.164.0.60 [2] devkind=[devkind] if devkind is not None else None)
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 72, in get_xla_supported_devices
2020-10-14 14:14:33 10.164.0.60 [2] xla_devices = _DEVICES.value
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 30, in value
2020-10-14 14:14:33 10.164.0.60 [2] self._value = self._gen_fn()
2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda>
2020-10-14 14:14:33 10.164.0.60 [2] _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices())
2020-10-14 14:14:33 10.164.0.60 [2] RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:244 : Check failed: default_device_target != options_.global_device_map.end()
2020-10-14 14:14:33 10.164.0.60 [2] *** Begin stack trace ***
2020-10-14 14:14:33 10.164.0.60 [2] tensorflow::CurrentStackTrace()
2020-10-14 14:14:33 10.164.0.60 [2] xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >)
2020-10-14 14:14:33 10.164.0.60 [2] xla::ComputationClient::Create()
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] xla::ComputationClient::Get()
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyCFunction_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyFunction_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_CallFunctionObjArgs
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyFunction_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_Call_Prepend
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict
2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallKeywords
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx
2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCode
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] PyRun_FileExFlags
2020-10-14 14:14:33 10.164.0.60 [2] PyRun_SimpleFileExFlags
2020-10-14 14:14:33 10.164.0.60 [2] Py_Main
2020-10-14 14:14:33 10.164.0.60 [2] main
2020-10-14 14:14:33 10.164.0.60 [2] __libc_start_main
2020-10-14 14:14:33 10.164.0.60 [2]
2020-10-14 14:14:33 10.164.0.60 [2] *** End stack trace ***
2020-10-14 14:27:11 10.164.0.61 [0] /anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/trainer.py:267: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't
be possible in a future version. Use `args.prediction_loss_only` instead.
2020-10-14 14:27:11 10.164.0.61 [0] FutureWarning,
2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred.
Epoch: 0%| | 0/1 [00:00<?, ?it/s]
2020-10-14 14:28:09 10.164.0.61 [0]
2020-10-14 14:28:20 10.164.0.61 [0]
2020-10-14 14:28:32 10.164.0.61 [0]
2020-10-14 14:28:33 10.164.0.61 [0]
2020-10-14 14:28:38 10.164.0.61 [0]
2020-10-14 14:28:38 10.164.0.61 [0]
2020-10-14 14:28:38 10.164.0.61 [0]
2020-10-14 14:28:40 10.164.0.61 [0]
2020-10-14 14:28:42 10.164.0.61 [0]
2020-10-14 14:28:54 10.164.0.61 [0]
2020-10-14 14:28:54 10.164.0.61 [0]
2020-10-14 14:28:56 10.164.0.61 [0]
2020-10-14 14:28:57 10.164.0.61 [0]
2020-10-14 14:28:58 10.164.0.61 [0]
2020-10-14 14:28:58 10.164.0.61 [0]
2020-10-14 14:28:59 10.164.0.61 [0]
2020-10-14 14:29:01 10.164.0.61 [0]
2020-10-14 14:29:01 10.164.0.61 [0]
2020-10-14 14:29:01 10.164.0.61 [0]
2020-10-14 14:29:02 10.164.0.61 [0]
2020-10-14 14:29:02 10.164.0.61 [0]
```
I think the problem lies here
```
2020-10-14 14:14:33 10.164.0.60 [2] RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:244 : Check failed: default_device_target != options_.global_device_map.end()
```
But I do not understand what is wrong.
I basically use the script from here https://huggingface.co/blog/how-to-train with some modifications: my own data, but the BERT vocabulary. I have used the same script for training on a TPUv3-8 as well, which works fine. The transformers library version is 3.3.1 and I am using the BERT tokenizer and BERT config. The distributed training on a TPU pod worked with the ImageNet example mentioned in https://cloud.google.com/tpu/docs/tutorials/pytorch-pod#configure-gcloud.
## To reproduce
I can't really provide the whole script, but the training command is as follows:
`python -m torch_xla.distributed.xla_dist --tpu=$TPU_NAME --conda-env=torch-xla-1.6 -- python /home/developer/XainBERT/xla_spawn.py --num_cores 8 /home/developer/XainBERT/train.py`
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 10-15-2020 08:48:50 | 10-15-2020 08:48:50 | Hi, I believe there is an error in your pytorch/xla install. I would follow the docs shared on [pytorch/xla](https://github.com/pytorch/xla) for best results.<|||||>Actually, I am using Google Cloud for running the training job. It already comes with the torch-xla-1.6 environment, and the ImageNet example works fine.
As fas as I understand the problem is here https://github.com/pytorch/xla/blob/53eb3fc5e701cb8cc79a45dd0f79c6e585a27a41/torch_xla/core/xla_model.py#L18.
I do not understand what is the problem though. <|||||>I don't really understand how you're launching your training. Our `xla_spawn.py` already takes care of parallelizing. I would do the following:
```
export TPU_IP_ADDRESS=xxx.xxx.xxx.xxx
export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"
python xla_spawn.py --num_cores 8 /home/developer/XainBERT/train.py
```<|||||>This works when I am using 8 core TPU. However, when I try to use TPUv3-128, then I get the error about cluster resolver. So I follow the instructions from here https://cloud.google.com/tpu/docs/tutorials/pytorch-pod#configure-gcloud. Also, the PyTorch lightning website also refers to the same page https://pytorch-lightning.readthedocs.io/en/latest/tpu.html. <|||||>Ah, I see. Unfortunately, I don't have sufficient experience with TPU pods to help you debug this. Can you try to open an issue on pytorch/xla's github?<|||||>That's sad. Am I right, that Huggingface does not have TF implementation of Language models? I could try to switch to TF and try. Anyways, I have opened an issue on PyTorch/xla.<|||||>No, we do have TF implementations of language models. Most of our models are available in both PyTorch and TensorFlow.<|||||>However, this link says that there is not TFtrainer support for language-modelling using raw data.
https://huggingface.co/transformers/examples.html
Is there any example available for TFtrainer?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|