repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 1,991 | closed | Facing AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings' | ## 🐛 AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings'
<!-- Important information -->
I'm facing AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings' while performing fine-tuning by using run_lm_finetuning.py.
Following are the arguments:
python run_lm_finetuning.py --train_data_file=sample_text.txt --model_type=gpt2 --model_name_or_path=gpt2 --output_dir=op --mlm --do_train --overwrite_output_dir --do_lower_case --save_steps=50
I tried to change the model but faced the same error:
## Detailed error message
Traceback (most recent call last):
File "run_lm_finetuning.py", line 556, in <module>
main()
File "run_lm_finetuning.py", line 508, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 218, in train
model.resize_token_embeddings(len(tokenizer))
File "/nipa/anaconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 585, in __getattr__
type(self).__name__, name))
AttributeError: 'DataParallel' object has no attribute 'resize_token_embeddings'
| 11-29-2019 07:13:45 | 11-29-2019 07:13:45 | I have two GPUs installed, but I haven't passed any argument to utilize both GPUs:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.01 Driver Version: 418.87.01 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... On | 00000000:00:05.0 Off | Off |
| N/A 39C P0 43W / 250W | 4976MiB / 32480MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-PCIE... On | 00000000:00:06.0 Off | Off |
| N/A 35C P0 26W / 250W | 11MiB / 32480MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 26820 C python 1653MiB |
| 0 69641 C python 1657MiB |
| 0 114902 C python 1655MiB |
+-----------------------------------------------------------------------------+
<|||||>I ran into the same issue. It is not fixed in the newest released version, 2.2.1.<|||||>Same issue when using multi-gpu. Single gpu case works.
e.g. running the command as `CUDA_VISIBLE_DEVICES=1 python run_lm_finetuning.py ...`<|||||>As mentioned by @kalpitdixit using Single GPU works fine but on multiple GPUs, problem persists. <|||||>Ok should be fixed on master, thanks! |
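For anyone hitting this on an older release, the usual workaround is to call the method on the wrapped model rather than on the `DataParallel` container. A minimal sketch of that pattern (not necessarily the exact fix that landed on master):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)

# resize_token_embeddings lives on the underlying transformers model,
# not on the DataParallel wrapper, so unwrap it first
model_to_resize = model.module if hasattr(model, "module") else model
model_to_resize.resize_token_embeddings(len(tokenizer))
```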
transformers | 1,990 | closed | When training QA models, albert-xxlarge-v2 uses much more GPU mem than Bert-large | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
When I used run_squad.py to train a QA model, I found that albert-xxlarge-v2 uses much more GPU memory than BERT-large. Specifically, when using BERT-large, I can set `max_seq_length = 512`, `batch_size = 12`. But when I use albert-xxlarge-v2, I can only set `max_seq_length = 512`, `batch_size = 6`.
In fact, the number of parameters of Albert XXLarge is much less than that of Bert large, and the size of model file is the same. Why does Albert-xxlarge occupy more GPU memory when training QA model? Is it caused by more head parameters? | 11-29-2019 07:04:21 | 11-29-2019 07:04:21 | 

ALBERT xxlarge has far fewer parameters than BERT large because ALBERT shares the same parameters across all transformer layers. But that does not reduce computation: ALBERT xlarge is about 1.5 times slower than BERT large, and ALBERT xxlarge about 3 times slower.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>ALBERT repeats the same parameters for each layer but increases each layer size, so even though it has fewer parameters than BERT, the memory needs are greater due to the much larger activations in each layer. |
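A quick way to see the parameter-count side of this for yourself (a rough sketch; it downloads both checkpoints, and the exact numbers may vary slightly by library version):
```python
from transformers import AlbertModel, BertModel

def count_parameters(model):
    # total number of trainable parameters
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

albert = AlbertModel.from_pretrained("albert-xxlarge-v2")
bert = BertModel.from_pretrained("bert-large-uncased")

# ALBERT xxlarge has far fewer parameters (roughly 200M+ vs ~335M for BERT large)
# thanks to cross-layer parameter sharing, but its 4096 hidden size makes the
# per-layer activations, and therefore the training memory, much larger.
print("albert-xxlarge-v2 :", count_parameters(albert))
print("bert-large-uncased:", count_parameters(bert))
```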
transformers | 1,989 | closed | Will you add XLNet text-generation feature ? | ## ❓ Questions & Help
There is `run_generation.py` in the examples now. Do you have a plan to add complete LM fine-tuning and inference support for XLNet, just like for GPT-2?
Thanks | 11-29-2019 05:29:10 | 11-29-2019 05:29:10 | Not in the short term<|||||>@thomwolf Thanks a lot<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,988 | closed | Possible error in the HuggingFace Transformers documentation? | Hello,
According to HuggingFace Transformers documentation website (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), under the GPT2DoubleHeadsModel, it defines the output lm_prediction_scores as the following:
`lm_prediction_scores: torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)`
To me this doesn't make sense. Shouldn't the dimension of `lm_prediction_scores` for `GPT2DoubleHeadsModel` be just `(batch_size, sequence_length, config.vocab_size)`? [no `num_choices` in the middle]
Thank you,
| 11-29-2019 01:36:44 | 11-29-2019 01:36:44 | The documentation seems correct to me. Have you tried out the example code that's been provided? If you run it and check the resulting `lm_prediction_scores`, you'll see its shape is `torch.Size([1, 2, 7, 50258])`, 2 being the length of `choices`.
This comment, and the linked blog post, explains the *DoubleHeadsModel pretty well:
https://github.com/huggingface/transformers/issues/1794#issuecomment-552627190<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
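For readers landing here later, a minimal sketch along the lines of the documented example (exact return types vary across library versions; indexing the first output is assumed to give the LM prediction scores):
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# add a [CLS] token for the multiple-choice head and resize the embeddings accordingly
tokenizer.add_special_tokens({"cls_token": "[CLS]"})
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
encoded = [tokenizer.encode(c) for c in choices]
input_ids = torch.tensor(encoded).unsqueeze(0)                 # (batch_size=1, num_choices=2, seq_len)
mc_token_ids = torch.tensor([[len(c) - 1 for c in encoded]])   # index of [CLS] in each choice

outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_prediction_scores = outputs[0]
# (batch_size, num_choices, sequence_length, vocab_size), e.g. [1, 2, 7, 50258]
print(lm_prediction_scores.shape)
```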
|
transformers | 1,987 | closed | Saving and resuming | Here's my basic implementation of the saving and resuming improvements discussed in #1960. So far, I've only modified the `run_lm_finetuning` example, but if my changes are approved I can update the rest of the examples as well.
There are three main changes:
1. The example now saves the optimizer, scheduler, and tokenizer every `save_steps` iterations.
2. The example now checks whether training is being continued from a checkpoint, and if so, looks for a saved optimizer and scheduler and loads them in.
3. The example checks whether training is being continued from a checkpoint, and if so, gets the global step of the checkpoint and continues training from the last saved global step. | 11-29-2019 01:29:39 | 11-29-2019 01:29:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=h1) Report
> Merging [#1987](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1987 +/- ##
=======================================
Coverage 82.67% 82.67%
=======================================
Files 111 111
Lines 16162 16162
=======================================
Hits 13362 13362
Misses 2800 2800
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=footer). Last update [0cb1638...bea4947](https://codecov.io/gh/huggingface/transformers/pull/1987?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thank you for your contribution! I just had a quick look an will give it the time it deserves later. Were you aware of [Pytorch's guidelines](https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-a-general-checkpoint-for-inference-and-or-resuming-training) when it comes to saving models? I reckon it might simplify your solution. <|||||>Did this pull request check the epochs AND steps?
For instance, my training process stopped in epoch 2 step 6423.
If I run it again, will it continue from epoch 2, step 6423?<|||||>I am no expert, but looking at PyTorch schedulers' code they do keep the current global step in their state. The schedulers should thus continue from the last step. The thing I am wondering about is how to continue from the same data sample.<|||||>Hi guys,
Just wanted to ask this: wouldn't too-frequent checkpointing to disk slow down training overall?
We could add a flag for users who want to save every epoch, like ```file_name_{epoch}.pt```.
We could also save the optimizer etc. in the same weights file.
Allowing users to specify those file names should be considered as well.
Thanks.<|||||>Hi,
@rlouf,
Saving all the checkpoints (model, tokenizer, optimizer, and scheduler) in one file like the PyTorch example does would break the `from_pretrained` method, but I could change it to save the optimizer and scheduler in one file instead of two.
I could change these lines:
```
# Saving
torch.save(optimizer.state_dict(), os.path.join(output_dir, 'optimizer.pt'))
torch.save(scheduler.state_dict(), os.path.join(output_dir, 'scheduler.pt'))
# Loading
optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, 'optimizer.pt')))
scheduler.load_state_dict(torch.load(os.path.join(args.model_name_or_path, 'scheduler.pt')))
```
to something like
```
# Saving
torch.save({
'optimizer_state_dict': optimizer.state_dict(),
'scheduler_state_dict': scheduler.state_dict()
}, os.path.join(output_dir, 'training_state.pt'))
# Loading
checkpoint = torch.load(os.path.join(output_dir, 'training_state.pt'))
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
scheduler.load_state_dict(checkpoint['scheduler_state_dict'])
```
@marcoaleixo Yes, the code will resume training from the last saved checkpoint. The code saves the model every `--save_steps` training steps and saves the checkpoint in the format `checkpoint-global_step`, so we know exactly which global step the last checkpoint was on. From the global step we can figure out how many epochs have been trained and how many batches should be skipped in the current epoch to continue training from where we left off. The code for this is in [these](https://github.com/bkkaggle/transformers/blob/saving-and-resuming/examples/run_lm_finetuning.py#L230) lines
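For readers following along, the bookkeeping boils down to something like this (a sketch with illustrative names and values, not the exact code in the script):
```python
# a sketch of recovering the training position from a checkpoint directory
# named like "checkpoint-1500"; every name and number here is illustrative
checkpoint_dir = "output/checkpoint-1500"
gradient_accumulation_steps = 1
batches_per_epoch = 5000                        # assumed length of the train dataloader
steps_per_epoch = batches_per_epoch // gradient_accumulation_steps

global_step = int(checkpoint_dir.split("-")[-1])
epochs_trained = global_step // steps_per_epoch
steps_trained_in_current_epoch = global_step % steps_per_epoch

# fast-forward: skip `epochs_trained` full epochs, then skip the first
# `steps_trained_in_current_epoch` optimizer steps of the interrupted epoch
print(epochs_trained, steps_trained_in_current_epoch)
```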
@rlouf To make sure you also continue from the last data sample, you would probably have to set a constant random seed in the dataloader and then `continue` through all the epochs and batches until you get to the saved checkpoint, which would take longer, especially if your dataset is very large.
@AdityaSoni19031997 You can set how often checkpoints are saved using the `--save_steps` flag. Saving the all the checkpoints (model, tokenizer, optimizer, and scheduler) in one file like the pytorch example does would break the `from_pretrained` method. Letting the user choose the file names might make it harder to automatically find and load in the files, and would require new command line parameters.<|||||>Tell me when this is not WIP anymore and ready to review, ok?<|||||>Hi @thomwolf, yes this is ready to review<|||||>@bkkaggle have you compared the training curves of a single run with the training curve of two half-runs with saving/reloading in the middle?<|||||>Hi @thomwolf, I ran some quick experiments to compare the loss curves.
The loss curves are almost, but not exactly identical - probably because the `RandomSampler` doesn't accept a random seed.
One more thing: The mean loss at the end of a continued training run won't be the same as a training run completed in one go because the mean loss gets reset to 0 when continuing training. Do you want to find some way around this, maybe by also saving the running loss at each checkpoint? or would doing this add too much complexity for no benefit?
wandb dashboard with the loss curves: https://app.wandb.ai/bkkaggle/saving-and-resuming?workspace=default
- `vague_valley_29` is the original 1 epoch training run
- `vital_bird_30` is the same training run, but cancelled after step 100
- `fresh-feather-34` is the training run resumed from `vital_bird_30`'s step 100 checkpoint<|||||>Ok, yes full determinism can be a rabbit-hole.
I think it's good to merge for me.
Ok for you as well @LysandreJik?<|||||>Yes, looks good to me!<|||||>I just finished updating the other pytorch examples: `run_xnli.py`, `run_ner.py`, `run_squad.py`, and `run_glue`. I pushed them up to my [branch](https://github.com/bkkaggle/transformers/commits/saving-and-resuming) and If you want, I can open another pull request for them. |
transformers | 1,986 | closed | Fine Tuning Bert for Q&A | This is more of a theoretical question. I would like to use a Bert Model trained on SQUAD 2.0 then train it further on my domain's Q&A dataset.
How would I do that. I've read through the code. As I understand it, I see that the BertforQuestionAnswering would be what I would need to use loaded with a model that is fine tuned on Squad so the weights match the architecture.
But now I want to further fine tune this model to include Q&A training data from my target domain. How would I do that? | 11-29-2019 00:22:47 | 11-29-2019 00:22:47 | A quick workaround would be appending your data to the SQUAD training dataset and doing the fine-tuning as usual.
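Another option is to start from a checkpoint that has already been fine-tuned on SQuAD and keep training it on the domain data. A minimal sketch of the loading side (the checkpoint directory and data file names are hypothetical):
```python
from transformers import BertForQuestionAnswering, BertTokenizer

# load a BERT model that was already fine-tuned on SQuAD 2.0 (hypothetical local path)
model = BertForQuestionAnswering.from_pretrained("./bert-finetuned-squad2")
tokenizer = BertTokenizer.from_pretrained("./bert-finetuned-squad2")

# from here, run the usual SQuAD-style training on the domain dataset, e.g. by
# pointing run_squad.py at this directory via --model_name_or_path and at the
# domain data via --train_file (a SQuAD-format JSON file)
```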
<|||||>Yes thats an approach. I've been reading a bit more about GPT and GPT-2 ... i'm wondering if I could use the generative approach with fine-tuning on a specific task that would help with SQUAD Q&A ability for my target domain? What are people's thoughts. Does the math work out?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> This is more of a theoretical question. I would like to use a Bert Model trained on SQUAD 2.0 then train it further on my domain's Q&A dataset.
>
> How would I do that. I've read through the code. As I understand it, I see that the BertforQuestionAnswering would be what I would need to use loaded with a model that is fine tuned on Squad so the weights match the architecture.
>
> But now I want to further fine tune this model to include Q&A training data from my target domain. How would I do that?
Hi @priteshpatel15 I am interested in this problem. Have you found a more elegant solution than what Matthew suggested above?
|
transformers | 1,985 | closed | run_squad.py for tf | Is there any version of the script for fine-tuning on SQuAD using TensorFlow? | 11-28-2019 22:48:53 | 11-28-2019 22:48:53 | Not yet, but it's on the roadmap. |
transformers | 1,984 | closed | [WIP] Squad refactor | This PR aims to refactor SQuAD to make it usable with all models with question answering heads, and without having to build the entire tokenization pipeline as it is currently done.
- It is based on processors that manage data, similarly to the GLUE processors. The two new processors are `SquadV1Processor` and `SquadV2Processor`. They'll probably be merged into a single `SquadProcessor` as the difference between the two versions is minimal.
- It leverages powerful abstractions made for the `run_glue` refactor a few months ago that greatly simplified the tokenization pipeline
- It can be interfaced with the package `tensorflow_datasets`.
- It better respects the library-wide naming, with `attention_mask` instead of `input_mask` and `token_type_ids` instead of `segment_ids`, among others.
- Introduces padding to `encode` and `encode_plus`, alongside tests.
It is still a work in progress but some aspects of it are working.
### Left to do
- [x] Add the processors to `__init__.py`
- [x] Patch the evaluation so that it leverages the current interface
- [x] Patch the evaluation so that it may work with tfds
- [x] Modify the run arguments to reflect the changes
- [x] Remove the `only_first` argument which would only be used for testing
- [x] Update tests running the `run_squad.py` script
- [x] Include the padding location in the tokenizers and reflect the changes in the feature converter
- [x] Test that all current models can train and evaluate (BERT, RoBERTa, XLNet, XLM)
- [x] Add the last models (DistilBERT, ALBERT, ...)
- [x] Return datasets (maybe only pytorch TensorDataset for now)
- [x] Documentation
- [x] Short examples showcasing the simple usage in the processors section.
- [x] Patch the evaluation for impossible questions
### Running sample
Here's the major difference from the user's perspective. Initially, to obtain the examples which were then converted to features, the user had to do as follows (taken from the current `run_squad.py`), which only works for BERT/XLNet/DistilBERT/ALBERT:
```py
examples = read_squad_examples(
input_file=input_file,
is_training=not evaluate,
version_2_with_negative=args.version_2_with_negative
)
features = convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
cls_token_segment_id=2 if args.model_type in ['xlnet'] else 0,
pad_token_segment_id=3 if args.model_type in ['xlnet'] else 0,
cls_token_at_end=True if args.model_type in ['xlnet'] else False,
sequence_a_is_doc=True if args.model_type in ['xlnet'] else False
)
```
In order to obtain the exact same results, the user now has to do as follows, which will be completely model independent once the `sequence_a_is_doc` is integrated in our sequence pair tokenization methods:
```py
processor = SquadV1Processor()
examples = processor.get_dev_examples("examples/squad") if evaluate else processor.get_train_examples("examples/squad")
features = squad_convert_examples_to_features(
examples=examples,
tokenizer=tokenizer,
max_seq_length=args.max_seq_length,
doc_stride=args.doc_stride,
max_query_length=args.max_query_length,
is_training=not evaluate,
sequence_a_is_doc=True if args.model_type in ['xlnet'] else False
)
```
The same can be done by using TFDS instead, removing the need to specify a file. The two initial lines now become:
```py
tfds_examples = tensorflow_datasets.load("squad")["validation"] if evaluate else tensorflow_datasets.load("squad")["train"]
examples = SquadV1Processor().get_examples_from_dataset(tfds_examples)
```
| 11-28-2019 22:36:08 | 11-28-2019 22:36:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=h1) Report
> Merging [#1984](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **decrease** coverage by `3.11%`.
> The diff coverage is `17.42%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1984 +/- ##
==========================================
- Coverage 82.67% 79.56% -3.12%
==========================================
Files 111 113 +2
Lines 16162 16969 +807
==========================================
+ Hits 13362 13501 +139
- Misses 2800 3468 +668
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/data/metrics/squad\_metrics.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvbWV0cmljcy9zcXVhZF9tZXRyaWNzLnB5) | `0% <0%> (ø)` | |
| [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: |
| [transformers/data/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvX19pbml0X18ucHk=) | `100% <100%> (ø)` | :arrow_up: |
| [transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `90.4% <100%> (+0.15%)` | :arrow_up: |
| [transformers/data/processors/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9fX2luaXRfXy5weQ==) | `100% <100%> (ø)` | :arrow_up: |
| [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.02% <100%> (+0.55%)` | :arrow_up: |
| [transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/1984/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9zcXVhZC5weQ==) | `14.7% <14.7%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=footer). Last update [0cb1638...2a4ef09](https://codecov.io/gh/huggingface/transformers/pull/1984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Should be about ready to merge now. ~I'm reproducing the results paper results on XLNet, BERT, XLM, RoBERTa, DistilBERT and ALBERT to make sure it works as expected, then I'll rebase and call for review.~
I made sure that I obtain somewhat the same results with fine-tuning + evaluating models with the old and new scripts. I could confirm it is the case for DistilBERT, BERT and XLNet.<|||||>Ok this is great, merging!
@LysandreJik do you want to have a quick look at the two comments I wrote about doc/docstring and the incomplete warning?
Merging now because these are small tweak and @mfuntowicz need this PR for his PR so I'll let you push a doc commit on master directly maybe. |
transformers | 1,983 | closed | add special tokens | Hello
I tried to add special tokens to bert tokenizer via add_special_tokens:
```
tokenizer.add_special_tokens({'additional_special_tokens':['SS']})
```
But I got CUDA error
```
CUDA error: device-side assert triggered
```
The code runs without adding additional_special_tokens!
Any idea? | 11-28-2019 21:07:44 | 11-28-2019 21:07:44 | Do you load everything (model, data) on GPU?
> Hello
> I tried to add special tokens to bert tokenizer via add_special_tokens:
>
> ```
> tokenizer.add_special_tokens({'additional_special_tokens':['SS']})
> ```
>
> But I got CUDA error
>
> ```
> CUDA error: device-side assert triggered
> ```
>
> The code runs without adding additional_special_tokens!
> Any idea?<|||||>Yes, I did. The code runs without this line :
```
tokenizer.add_special_tokens({'additional_special_tokens':['SS']})
```
Do you think it is a resourcse issue?
Thanks <|||||>I got this lately:
```
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1370 ret = torch.addmm(bias, input, weight.t())
1371 else:
-> 1372 output = input.matmul(weight.t())
1373 if bias is not None:
1374 output += bias
RuntimeError: cublas runtime error : resource allocation failed at /pytorch/aten/src/THC/THCGeneral.cpp:216
```<|||||>Are you changing the #vocab count in the config as well?
<|||||>No, I did not. <|||||>you can check them with``` tokenizer.convert_ids_to_tokens(id)``` (I don’t remember exactly, but I think, from 100 to 1000 are free, maybe, from 5 to 1000 even free, crosscheck please, also it depends on the "case" of the model)
Generally there's a pack in the beginning and then somewhere in the between we have these free unused tokens..!<|||||>Try this,
```
### Let's load a model and tokenizer
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
### Do some stuff to our model and tokenizer
# Ex: add new tokens to the vocabulary and embeddings of our model
tokenizer.add_tokens(['[SPECIAL_TOKEN_1]', '[SPECIAL_TOKEN_2]'])
model.resize_token_embeddings(len(tokenizer))
# Train our model
train(model)
### Now let's save our model and tokenizer to a directory
model.save_pretrained('./my_saved_model_directory/')
tokenizer.save_pretrained('./my_saved_model_directory/')
### Reload the model and the tokenizer
model = BertForSequenceClassification.from_pretrained('./my_saved_model_directory/')
tokenizer = BertTokenizer.from_pretrained('./my_saved_model_directory/')
```<|||||>Thank you,
I missed this line. Silly mistake
```
model.resize_token_embeddings(len(tokenizer))
```
it worked!<|||||>You can close the issue :) |
transformers | 1,981 | closed | Transformers for WebNLG tasks | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Can we leverage GPT-2 pre-trained model for WebNLG tasks ?http://webnlg.loria.fr/pages/challenge.html
The WebNLG challenge consists in mapping data to text
similar to what is being done in https://github.com/tyliupku/wiki2bio. | 11-28-2019 16:25:02 | 11-28-2019 16:25:02 | Actually I'm working on this right now. Interested to know as well if anyone else has done it.
Most probably this is possible.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,980 | closed | update all tf.shape and tensor.shape to shape_list | We need to use the special method `shape_list` from `modeling_tf_utils` to be sure we can get TF 2.0 tensor shapes both in eager and non-eager mode.
This PR fixes this for all TF 2.0 models and templates. | 11-28-2019 14:53:31 | 11-28-2019 14:53:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=h1) Report
> :exclamation: No coverage uploaded for pull request base (`master@49a69d5`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
> The diff coverage is `90.69%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1980 +/- ##
=========================================
Coverage ? 84.05%
=========================================
Files ? 105
Lines ? 15533
Branches ? 0
=========================================
Hits ? 13056
Misses ? 2477
Partials ? 0
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `85.57% <100%> (ø)` | |
| [transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.86% <100%> (ø)` | |
| [transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.54% <100%> (ø)` | |
| [transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.66% <100%> (ø)` | |
| [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `90.43% <100%> (ø)` | |
| [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `88.16% <100%> (ø)` | |
| [transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2dwdDIucHk=) | `94.75% <100%> (ø)` | |
| [transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `95.92% <100%> (ø)` | |
| [transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2FsYmVydC5weQ==) | `85.26% <81.81%> (ø)` | |
| [transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1980/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.31% <84.61%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=footer). Last update [49a69d5...255516a](https://codecov.io/gh/huggingface/transformers/pull/1980?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Very nice! |
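For context, the `shape_list` helper this PR rolls out follows a pattern roughly like the sketch below (a simplified restatement, not necessarily the exact implementation in `modeling_tf_utils`): it combines static and dynamic shapes so the result is usable both eagerly and inside a graph.
```python
import tensorflow as tf

def shape_list(x):
    # prefer the static dimension when it is known; fall back to the dynamic
    # tf.shape() tensor for dimensions that are None (e.g. the batch axis)
    static = x.shape.as_list()
    dynamic = tf.shape(x)
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]

# example: works inside tf.function even when the batch dimension is unknown
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 16], dtype=tf.float32)])
def fn(x):
    batch, features = shape_list(x)
    return tf.reshape(x, [batch, features])
```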
transformers | 1,979 | closed | AlbertForQuestionAnswering | Hello! Thanks for adding Albert so quickly! I have a problem with Albert answering a simple question from the Huggingface default example:
```
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(3) else 1 for i in range(len(input_ids))] # for albert [SEP] token has id 3
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
```
It actually shows empty output because
```
torch.argmax(start_scores), torch.argmax(end_scores)+1
## (tensor(7), tensor(6))
```
For other versions of Albert I also get some nonsense results :(
Thanks in advance | 11-28-2019 14:48:05 | 11-28-2019 14:48:05 | Hi! The `albert` checkpoints only include the base model (the transformer model), and not the separate heads for each task (classification/question answering/...).
For question answering, you would have to first fine-tune the model to this specific task, as the question answering head is initialized randomly. You can do so with the `run_squad.py` example.<|||||>It should be explained in that example, thank you for raising this issue! I'll change that.<|||||>Ok! Thanks a lot!<|||||>It would be really nice if you could release pretrained checkpoints for the specific tasks... I know it's a big ask but it would save so many watts of energy all over the world....<|||||>The model that needs fine-tuning for a downstream task is very general and task-agnostic; if the task-specific model were released, what extra thing would you need to do? Also, if it were released, it would not be called a “pretrained model”: all the training would already be finished.....
<|||||>I mean fine tuned for squad 2, for example. I would like to play with its
capabilities but the fine tuning process is a tad daunting....
<|||||>I totally agree that it would be nice to have the weights for Albert finetuned on Squad available. <|||||>I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.
it is compatible with the huggingface models, so you can get it with:
`wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`
and extract it into say, directory spanbert
I use it something like:
```
import torch
from transformers import BertTokenizer, BertForQuestionAnswering
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertForQuestionAnswering.from_pretrained('./spanbert')
q = "who am i?"
doc = "my name is slim shady"
input_text = "[CLS] " + q+ " [SEP] " + doc + " [SEP]"
input_ids = tokenizer.encode(input_text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]
if not res or res[0] == "[CLS]":
print("MISSING")
else:
prev_token = ""
for i, t in enumerate(res):
if t.startswith("##"):
res[i-1] += t[2:]
res[i] = ""
print(" ".join([x for x in res if x != ""]))
```
I am including the snippet here as it is so hard to find minimal activations of BERT on single entries, especially for Q&A
<|||||>Thanks a lot!<|||||>@mosheliv - isn't that just for bert, not albert?
<|||||>Yes, it is, but it was the only squad2 pre-trained i could find.
<|||||>> I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.
> it is compatible with the huggingface models, so you can get get it with:
> `wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`
> and extract it into say, directory spanbert
> I use it something like:
>
> ```
> import torch
> from transformers import BertTokenizer, BertForQuestionAnswering
> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
> model = BertForQuestionAnswering.from_pretrained('./spanbert')
> q = "who am i?"
> doc = "my name is slim shady"
> input_text = "[CLS] " + q+ " [SEP] " + doc + " [SEP]"
> input_ids = tokenizer.encode(input_text)
> token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
> start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
> all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
> res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]
> if not res or res[0] == "[CLS]":
> print("MISSING")
> else:
> prev_token = ""
> for i, t in enumerate(res):
> if t.startswith("##"):
> res[i-1] += t[2:]
> res[i] = ""
> print(" ".join([x for x in res if x != ""]))
> ```
>
> I am including the snipped here as it is so hard to find minimal activations of bert on single entries, especially for Q&A
Can we assume that whenever there's a `[CLS]` in the answer, it basically means no answer? I'm asking since I know depending on how we treat such cases, it can affect the performance evaluation. Please take a look at my question asked [here on SO](https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean).
Also for folks who might be looking for a running example of fine-tuned ALBERT on SQuAD v2.0, you might find this helpful:
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
model = AutoModelForQuestionAnswering.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
question = "Where is the capital of the USA?"
text = "Capital of the USA is the beautiful Washington D.C."
input_dict = tokenizer.encode_plus(question, text, return_tensors="pt")
input_ids = input_dict["input_ids"].tolist()
start_scores, end_scores = model(**input_dict)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
answer = ''.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', ' ').strip()
print(answer)
```
<|||||>No expert on this model but yes, this is how I used it.
Thanks for the albert, will try it later on!
On Sun, Feb 9, 2020, 16:56 Pedram <[email protected]> wrote:
> I have found a facebook model pretrained (oh sorry, fine tuned :) on
> squad2.0 in https://github.com/facebookresearch/SpanBERT.
> it is compatible with the huggingface models, so you can get get it with:
> wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz
> and extract it into say, directory spanbert
> I use it something like:
>
> import torch
>
> from transformers import BertTokenizer, BertForQuestionAnswering
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
>
> model = BertForQuestionAnswering.from_pretrained('./spanbert')
>
> q = "who am i?"
>
> doc = "my name is slim shady"
>
> input_text = "[CLS] " + q+ " [SEP] " + doc + " [SEP]"
>
> input_ids = tokenizer.encode(input_text)
>
> token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
>
> start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
>
> all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
>
> res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]
>
> if not res or res[0] == "[CLS]":
>
> print("MISSING")
>
> else:
>
> prev_token = ""
>
> for i, t in enumerate(res):
>
> if t.startswith("##"):
>
> res[i-1] += t[2:]
>
> res[i] = ""
>
> print(" ".join([x for x in res if x != ""]))
>
>
> I am including the snipped here as it is so hard to find minimal
> activations of bert on single entries, especially for Q&A
>
> Can we assume that whenever there's a [CLS] in the answer, it basically
> means no answer? I'm asking since I know depending on how we treat such
> cases, it can affect the performance evaluation. Please see take a look at
> my question asked here on SO
> <https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean>
> .
>
> Also for folks who might be looking for a running example of fine-tuned
> ALBERT on SQuAD v2.0, you might find this helpful:
>
> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>
>
>
> tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
>
> model = AutoModelForQuestionAnswering.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
>
> question = "Where is the capital of the USA?"
>
> text = "The capital of the USA is beautiful Washington D.C."
>
>
>
> input_dict = tokenizer.encode_plus(question, text, return_tensors="pt")
>
> input_ids = input_dict["input_ids"].tolist()
>
> start_scores, end_scores = model(**input_dict)
>
>
>
> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
>
> answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', '')
>
> print(answer)
>
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/1979?email_source=notifications&email_token=AC7IWCYV3DSSF2HRGDQDFWLRB55FLA5CNFSM4JSVQEGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOELGBZ2A#issuecomment-583802088>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AC7IWC6KKB4S4UBYSQGQ3M3RB55FLANCNFSM4JSVQEGA>
> .
>
<|||||>> > I have found a facebook model pretrained (oh sorry, fine tuned :) on squad2.0 in https://github.com/facebookresearch/SpanBERT.
> > it is compatible with the huggingface models, so you can get get it with:
> > `wget http://dl.fbaipublicfiles.com/fairseq/models/spanbert_squad2.tar.gz`
> > and extract it into say, directory spanbert
> > I use it something like:
> > ```
> > import torch
> > from transformers import BertTokenizer, BertForQuestionAnswering
> > tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
> > model = BertForQuestionAnswering.from_pretrained('./spanbert')
> > q = "who am i?"
> > doc = "my name is slim shady"
> > input_text = "[CLS] " + q+ " [SEP] " + doc + " [SEP]"
> > input_ids = tokenizer.encode(input_text)
> > token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
> > start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
> > all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
> > res = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]
> > if not res or res[0] == "[CLS]":
> > print("MISSING")
> > else:
> > prev_token = ""
> > for i, t in enumerate(res):
> > if t.startswith("##"):
> > res[i-1] += t[2:]
> > res[i] = ""
> > print(" ".join([x for x in res if x != ""]))
> > ```
> >
> >
> > I am including the snipped here as it is so hard to find minimal activations of bert on single entries, especially for Q&A
>
> Can we assume that whenever there's a `[CLS]` in the answer, it basically means no answer? I'm asking since I know depending on how we treat such cases, it can affect the performance evaluation. Please see take a look at my question asked [here on SO](https://stackoverflow.com/questions/60133236/what-does-berts-special-characters-appearance-in-squads-qa-answers-mean).
>
> Also for folks who might be looking for a running example of fine-tuned ALBERT on SQuAD v2.0, you might find this helpful:
>
> ```
> from transformers import AutoTokenizer, AutoModelForQuestionAnswering
>
> tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
> model = AutoModelForQuestionAnswering.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
> question = "Where is the capital of the USA?"
> text = "Capital of the USA is the beautiful Washington D.C."
>
> input_dict = tokenizer.encode_plus(question, text, return_tensors="pt")
> input_ids = input_dict["input_ids"].tolist()
> start_scores, end_scores = model(**input_dict)
>
> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
> answer = ''.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]).replace('▁', ' ').strip()
> print(answer)
> ```
Hi! Thanks for this, I'm only a beginner and this really saved me a lot of trouble! I had a small question however. Apparently the page for this model [https://huggingface.co/ktrapeznikov/albert-xlarge-v2-squad-v2] shows there is a way to get the 'scores' of the spans in addition to getting an answer but I couldn't get it to work myself. The code is supposed to be on the lines of:
```
start_scores, end_scores = model(input_ids)
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0] #no answer scores
```
But this doesn't return a single score. What am I missing? <|||||>@desaibhargav probably a little late for this but you can get the answers scores like so:
answer_start = torch.argmax(start_scores) # get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(end_scores) + 1
answer_span = inputs["input_ids"][0][answer_start:answer_end]
answer_tokens = tokenizer.convert_ids_to_tokens(answer_span)
tokenizer.convert_tokens_to_string(answer_tokens)
This converts the answer spans to the answer. However, I'm not sure how to ignore score my best guess is that its filtered based off of some threshold
|
transformers | 1,978 | closed | Modify position_embeddings from pre_trained model | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I load a model like below:
`model1 = BertForSequenceClassification.from_pretrained('bert-base-uncased')`
```
BertForSequenceClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(...
```
I want to change Embedding size from 512 to 1024, but when I try to add like this and get an error:
`model = BertForSequenceClassification.from_pretrained('bert-base-uncased', max_position_embeddings=1024)`
> RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).
May I know how to change configs of pre-trained model layers? | 11-28-2019 12:12:26 | 11-28-2019 12:12:26 | I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.
If my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.
> ## Questions & Help
> When I load a model like below:
> `model1 = BertForSequenceClassification.from_pretrained('bert-base-uncased')`
>
> ```
> BertForSequenceClassification(
> (bert): BertModel(
> (embeddings): BertEmbeddings(
> (word_embeddings): Embedding(30522, 768, padding_idx=0)
> (position_embeddings): Embedding(512, 768)
> (token_type_embeddings): Embedding(2, 768)
> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
> (dropout): Dropout(p=0.1, inplace=False)
> )
> (encoder): BertEncoder(
> (layer): ModuleList(...
> ```
>
> I want to change Embedding size from 512 to 1024, but when I try to add like this and get an error:
> `model = BertForSequenceClassification.from_pretrained('bert-base-uncased', max_position_embeddings=1024)`
>
> > RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
> > size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).
>
> May I know how to change configs of pre-trained model layers?<|||||>> I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.
> If my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.
>
As I check with `vars(BertForSequenceClassification.from_pretrained('bert-base-uncased'))`:
```
{'_backend': <torch.nn.backends.thnn.THNNFunctionBackend at 0x1e557269400>,
'_parameters': OrderedDict(),
'_buffers': OrderedDict(),
'_backward_hooks': OrderedDict(),
'_forward_hooks': OrderedDict(),
'_forward_pre_hooks': OrderedDict(),
'_state_dict_hooks': OrderedDict(),
'_load_state_dict_pre_hooks': OrderedDict(),
'_modules': ...
'training': False,
'config': {
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512, <============= This is the one
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
},
'num_labels': 2}
```
So I decided to replace `config` with `config=BertConfig(max_position_embeddings=1024)`:
```
{
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 1024, <============== It changed
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
```
But the same error is occurred when `BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)`:
```
RuntimeError Traceback (most recent call last)
<ipython-input-9-cfc2c553c1d9> in <module>
1 config=BertConfig(max_position_embeddings=1024)
----> 2 BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)
C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
457 if len(error_msgs) > 0:
458 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
--> 459 model.__class__.__name__, "\n\t".join(error_msgs)))
460
461 if hasattr(model, 'tie_weights'):
RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).
```
... 🚶
<|||||>Sorry, but it is obvious that it doesn't work. As a said before, BERT was trained with a **particular** architecture (i.e. with 512 as max positional embeddings), and it was saved with this shape. You cannot load weights that doesn't match your architecture!
> > I think you **cannot** change this parameter because doing so you're trying to load weights with (512, 768) shape into an architecture with (1024, 768), and it's not possible.
> > If my statement is true (maybe some authors of Transformers can confirm or deny my statement), maybe a way to avoid that end users like you try to change this parameter would be to make this variable private, such as `_max_position_embeddings`.
>
> As I check with `vars(BertForSequenceClassification.from_pretrained('bert-base-uncased'))`:
>
> ```
> {'_backend': <torch.nn.backends.thnn.THNNFunctionBackend at 0x1e557269400>,
> '_parameters': OrderedDict(),
> '_buffers': OrderedDict(),
> '_backward_hooks': OrderedDict(),
> '_forward_hooks': OrderedDict(),
> '_forward_pre_hooks': OrderedDict(),
> '_state_dict_hooks': OrderedDict(),
> '_load_state_dict_pre_hooks': OrderedDict(),
> '_modules': ...
> 'training': False,
> 'config': {
> "attention_probs_dropout_prob": 0.1,
> "finetuning_task": null,
> "hidden_act": "gelu",
> "hidden_dropout_prob": 0.1,
> "hidden_size": 768,
> "initializer_range": 0.02,
> "intermediate_size": 3072,
> "is_decoder": false,
> "layer_norm_eps": 1e-12,
> "max_position_embeddings": 512, <============= This is the one
> "num_attention_heads": 12,
> "num_hidden_layers": 12,
> "num_labels": 2,
> "output_attentions": false,
> "output_hidden_states": false,
> "output_past": true,
> "pruned_heads": {},
> "torchscript": false,
> "type_vocab_size": 2,
> "use_bfloat16": false,
> "vocab_size": 30522
> },
> 'num_labels': 2}
> ```
>
> So I decided to replace `config` with `config=BertConfig(max_position_embeddings=1024)`:
>
> ```
> {
> "attention_probs_dropout_prob": 0.1,
> "finetuning_task": null,
> "hidden_act": "gelu",
> "hidden_dropout_prob": 0.1,
> "hidden_size": 768,
> "initializer_range": 0.02,
> "intermediate_size": 3072,
> "is_decoder": false,
> "layer_norm_eps": 1e-12,
> "max_position_embeddings": 1024, <============== It changed
> "num_attention_heads": 12,
> "num_hidden_layers": 12,
> "num_labels": 2,
> "output_attentions": false,
> "output_hidden_states": false,
> "output_past": true,
> "pruned_heads": {},
> "torchscript": false,
> "type_vocab_size": 2,
> "use_bfloat16": false,
> "vocab_size": 30522
> }
> ```
>
> But the same error is occurred when `BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)`:
>
> ```
> RuntimeError Traceback (most recent call last)
> <ipython-input-9-cfc2c553c1d9> in <module>
> 1 config=BertConfig(max_position_embeddings=1024)
> ----> 2 BertForSequenceClassification.from_pretrained('bert-base-uncased', config=config)
>
> C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
> 457 if len(error_msgs) > 0:
> 458 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
> --> 459 model.__class__.__name__, "\n\t".join(error_msgs)))
> 460
> 461 if hasattr(model, 'tie_weights'):
>
> RuntimeError: Error(s) in loading state_dict for BertForSequenceClassification:
> size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([1024, 768]).
> ```
>
> ... |
transformers | 1,977 | closed | 'convert_tf_checkpoint_to_pytorch.py' file is missing | Hi all,
I pre-trained BERT base model on my domain-specific corpus using ```https://github.com/google-research/bert``` ```create_pretraining_data.py``` and ```run_pretraining.py```.
Now, I want to use it with this ```pytorch-transformers```. I saw from this page https://devhub.io/repos/huggingface-pytorch-pretrained-BERT that there is conversion script from tf checkpoints to ```pytorch_model.bin``` called ```convert_tf_checkpoint_to_pytorch.py``` but the file no longer exists.
Does anyone have solution? Thanks!
| 11-28-2019 07:13:52 | 11-28-2019 07:13:52 | _PyTorch-pretrained-BERT_ is an older name of this library; now its name is **Transformers**.
You can check the latest docs of the library and install it from PyPi with `pip install transformers` (you have to install manually TensorFlow 2.0 and PyTorch as well through `pip install tensorflow==2.0.0` and `pip install torch`).
Said this, you can read [this](https://github.com/huggingface/transformers/blob/17ea43cf985829634bd86b36b44e5410c6f83e36/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) Python script whose goal is to convert BERT original TensorFlow checkpoint to PyTorch. The input of this script are the following three:
- the path to the TensorFlow checkpoint, through `--tf_checkpoint_path` parameter
- a JSON file which specifies the model architecture through `--bert_config_file` parameter
- the path to the output converted PyTorch model through `--pytorch_dump_path` parameter
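For reference, here is a minimal Python sketch of what that script does under the hood. The paths are placeholders, and the `(model, config, checkpoint_path)` signature of the exported `load_tf_weights_in_bert` helper is assumed; treat this as an illustration rather than the script itself.
```python
import torch
from transformers import BertConfig, BertForPreTraining, load_tf_weights_in_bert

# The three CLI parameters of the conversion script, written out as plain paths (placeholders).
tf_checkpoint_path = "/path/to/bert_model.ckpt"    # --tf_checkpoint_path
bert_config_file = "/path/to/bert_config.json"     # --bert_config_file
pytorch_dump_path = "/path/to/pytorch_model.bin"   # --pytorch_dump_path

config = BertConfig.from_json_file(bert_config_file)        # architecture described by the JSON file
model = BertForPreTraining(config)                           # untrained PyTorch model with that architecture
load_tf_weights_in_bert(model, config, tf_checkpoint_path)   # copy the TensorFlow variables into it
torch.save(model.state_dict(), pytorch_dump_path)            # write the converted PyTorch weights
```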
> Hi all,
> I pre-trained BERT base model on my domain-specific corpus using `https://github.com/google-research/bert` `create_pretraining_data.py` and `run_pretraining.py`.
> Now, I want to use it with this `pytorch-transformers`. I saw from this page https://devhub.io/repos/huggingface-pytorch-pretrained-BERT that there is conversion script from tf checkpoints to `pytorch_model.bin` called `convert_tf_checkpoint_to_pytorch.py` but the file no longer exists.
> Does anyone have solution? Thanks!<|||||>Thanks @TheEdoardo93! |
transformers | 1,976 | closed | Merge pull request #1 from huggingface/master | merge | 11-28-2019 02:14:28 | 11-28-2019 02:14:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=h1) Report
> Merging [#1976](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/96e7ee72380a135bfd07b8fdc2018bcbea65b086?src=pr&el=desc) will **increase** coverage by `0.2%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1976 +/- ##
=========================================
+ Coverage 84.06% 84.26% +0.2%
=========================================
Files 105 104 -1
Lines 15537 15431 -106
=========================================
- Hits 13061 13003 -58
+ Misses 2476 2428 -48
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |
| [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |
| [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |
| [transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (-0.02%)` | :arrow_down: |
| [transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |
| [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |
| [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |
| [transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45.94% <0%> (ø)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/1976/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=footer). Last update [96e7ee7...5a3f240](https://codecov.io/gh/huggingface/transformers/pull/1976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 1,975 | closed | How can we view different versions of documentation? | ## ❓ Questions & Help
How can I select a specific version of transformers in the documentation located here: https://huggingface.co/transformers/index.html?
| 11-27-2019 23:14:14 | 11-27-2019 23:14:14 | We're in the process of building better versioned documentation with easier links to follow, but at the moment the different versions are accessible in the [README](https://github.com/huggingface/transformers#state-of-the-art-natural-language-processing-for-tensorflow-20-and-pytorch), right before the `installation` section.<|||||>Great, thank you! |
transformers | 1,974 | closed | Albert Hyperparameters for Fine-tuning SQuAD 2.0 | ## ❓ Questions & Help
I want to fine-tune `albert-xxlarge-v1` on SQuAD 2.0 and am in need of optimal hyperparameters. I did not find any discussion in the Albert original paper regarding suggested fine-tuning hyperparameters, as is provided in the XLNet original paper. I did find the following hard-coded parameters in the Google-research Albert `run_squad_sp.py` code:
```
'do_lower_case' = True
'train_batch_size' = 32
'predict_batch_size' = 8
'learning_rate' = 5e-5
'num_train_epochs' = 3.0
'warmup_proportion' = 0.1
```
With fine-tuning on my 2x GPUs taking ~69 hours, I'd like to shrink the number of fine-tuning iterations necessary to attain optimal model performance. Anyone have a bead on the optimal hyperparameters?
Also, Google-research comments in `run_squad_sp.py` state that `warmup_proportion` is "Proportion of training to perform linear learning rate warmup for." "E.g., 0.1 = 10% of training". Since 3 epochs, batch size = 32 while fine-tuning SQuAD 2.0 results in approximately 12.5K total optimization steps, would I set `--warmup_steps = 1250` when calling Transformers' run_squad.py?
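For a quick sanity check of those numbers, here is the rough arithmetic behind the figure quoted above; the training-set size is approximate, so treat the result as an estimate rather than an exact step count:
```python
# Back-of-the-envelope estimate of total optimization steps and a 10% warmup.
num_train_examples = 130_000   # rough size of the SQuAD 2.0 training set
batch_size = 32
epochs = 3

total_steps = num_train_examples * epochs // batch_size  # ~12,200 steps, close to the ~12.5K quoted
warmup_steps = round(0.1 * total_steps)                   # ~1,200, i.e. --warmup_steps ≈ 1250
print(total_steps, warmup_steps)
```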
Thanks in advance for any input. | 11-27-2019 22:03:22 | 11-27-2019 22:03:22 | Wondering this as well, but for GLUE tasks. There doesn't seem to be a good consensus on hyperparameters such as weight decay.<|||||>Results using the hyperparameters from my first post above, varying only batch size:
```
albert_xxlargev1_squad2_512_bs32:
{
"exact": 83.67725090541565,
"f1": 87.51235434089064,
"total": 11873,
"HasAns_exact": 81.86572199730094,
"HasAns_f1": 89.54692697189559,
"HasAns_total": 5928,
"NoAns_exact": 85.48359966358284,
"NoAns_f1": 85.48359966358284,
"NoAns_total": 5945
}
albert_xxlargev1_squad2_512_bs48:
{
"exact": 83.65198349195654,
"f1": 87.4736247587816,
"total": 11873,
"HasAns_exact": 81.73076923076923,
"HasAns_f1": 89.38501126197984,
"HasAns_total": 5928,
"NoAns_exact": 85.5677039529016,
"NoAns_f1": 85.5677039529016,
"NoAns_total": 5945
}
```


<|||||>@ahotrod There is a table in the appendix section of the ALBERT paper, which shows hyperparameters for ALBERT in downstream tasks:

<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,973 | closed | Changes to S3 Roberta / RobertaForSequenceClassification | Hello,
I'm wondering if there have been any changes in either the pretrained Roberta model or the configuration of RobertaForSequenceClassification within the past month or so. I am initializing it as `RobertaForSequenceClassification.from_pretrained(...)` and running it as demonstrated in `run_glue.py`.
For a custom dataset, I am noticing that the fine-tune accuracy has decreased by several percentage points on newly trained models as opposed to ~1 month ago. This happens consistently (i.e., I've tried retraining multiple times), using the same hyperparameters.
In addition, I've noticed that the new training times are cut by ~1/2, so the model seems to train faster, but is less performant. | 11-27-2019 21:36:35 | 11-27-2019 21:36:35 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,972 | closed | How to persist cloud-based transformers | ## ❓ Questions & Help
I'm using this repo in the cloud, and attempting to persist the model fails: it seems HDF5 (and thus h5py) doesn't support that paradigm, per https://github.com/h5py/h5py/issues/925
What is the recommended method of saving the model in this scenario? Thanks!
| 11-27-2019 18:08:52 | 11-27-2019 18:08:52 | Which type of model are you trying to serialize to an .hdf5 file? A pre-trained model such as BertFor*, or a custom model trained with Transformers? Are you using GCP, Azure, or AWS?
> ## Questions & Help
> I'm using this repo in the cloud, and attempting to persist the model fails: it seems HDF5 (and thus h5py) doesn't support that paradigm, per [h5py/h5py#925](https://github.com/h5py/h5py/issues/925)
>
> What is the recommended method of saving the model in this scenario? Thanks!<|||||>Hi @TheEdoardo93. I'm trying to serialize TFBertFor* that has been fine-tuned. I'm on Azure.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
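As an editorial aside for readers hitting the same limitation: one hedged workaround is to skip full-model HDF5 serialization and rely on the library's own weight/config persistence instead. The directory path below is a placeholder and `model` stands for the fine-tuned TFBertFor* instance from your training code.
```python
import os
from transformers import TFBertForSequenceClassification

os.makedirs("./finetuned_tfbert", exist_ok=True)

# save_pretrained writes only the config plus the weights (via save_weights),
# so it sidesteps Keras' restriction on saving subclassed models as full HDF5 graphs.
model.save_pretrained("./finetuned_tfbert")

# Later, rebuild the architecture from the config and reload the weights.
reloaded = TFBertForSequenceClassification.from_pretrained("./finetuned_tfbert")
```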
|
transformers | 1,971 | closed | add add_special_tokens=True for input examples | According to #1957, in some versions of transformers, add_special_tokens is not set to True by default. In that case, the example code is wrong, as input_ids will be missing the [CLS] and [SEP] tokens. It's better to pass add_special_tokens=True explicitly in the example. | 11-27-2019 16:50:49 | 11-27-2019 16:50:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=h1) Report
> Merging [#1971](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5afca00b4732f57329824e1538897e791e02e894?src=pr&el=desc) will **increase** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1971 +/- ##
==========================================
+ Coverage 84.06% 84.24% +0.18%
==========================================
Files 105 104 -1
Lines 15536 15431 -105
==========================================
- Hits 13060 13000 -60
+ Misses 2476 2431 -45
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <ø> (ø)` | :arrow_up: |
| [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.63% <0%> (-1.46%)` | :arrow_down: |
| [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |
| [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |
| [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |
| [transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |
| [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |
| [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/1971/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=footer). Last update [5afca00...d5dd44e](https://codecov.io/gh/huggingface/transformers/pull/1971?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is less ambiguous, indeed! Thank you for taking the time to open a PR. |
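As an editorial illustration of the change this PR argues for (the model name is just an example), passing the flag explicitly makes the special tokens unambiguous regardless of the installed version:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
input_ids = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)

# With the flag set, the sequence is wrapped in the special tokens:
# it starts with the [CLS] id (101) and ends with the [SEP] id (102) for bert-base-uncased.
print(input_ids)
```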
transformers | 1,970 | closed | Bert Tensor Dimensions | ## 🐛 Bug
My code was working on 2.1; it throws an error in 2.2:
transformers\modeling_tf_bert.py:777 call
outputs = self.bert(inputs, **kwargs)
transformers\modeling_tf_bert.py:512 call
attention_mask = tf.fill(input_shape, 1)
tensorflow_core\python\ops\array_ops.py:171 fill
result = gen_array_ops.fill(dims, value, name=name)
tensorflow_core\python\ops\gen_array_ops.py:3602 fill
"Fill", dims=dims, value=value, name=name)
tensorflow_core\python\framework\op_def_library.py:545 _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor: (None, 72)
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: predict function
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## Environment
* OS:
* Python version: 3.7.4
* PyTorch version: -
* Tensorflow : 2.0.0
* Using GPU : Yes
* Distributed of parallel setup : No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 11-27-2019 16:14:42 | 11-27-2019 16:14:42 | Hi, thanks for letting us know. Is there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?<|||||>model.predict function fails with output_hidden_states=True in constructor<|||||>I'm failing to reproduce what you're mentioning with the following snippet:
```py
from transformers import TFBertModel, BertTokenizer, BertConfig
import tensorflow as tf
config = BertConfig.from_pretrained("bert-base-cased", output_hidden_states=True)
model = TFBertModel.from_pretrained("bert-base-cased", config=config)
tok = BertTokenizer.from_pretrained("bert-base-cased")
text = tok.encode("Ain't this [MASK] best thing you've ever seen?")
inputs = tf.constant(text)
outputs = model.predict(inputs)
print(outputs)
```
Is there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?
<|||||>With this piece of code you've posted, I'm encountered the same problem highlighted by @halidziya .
**ENVIRONMENT**:
- Python 3.6.9
- OS: Ubuntu 16.04 ('Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid')
- Transformers: 2.2.2 (installed with `pip install transformers` the day after the release)
- PyTorch: 1.3.1
- TensorFlow: 2.0.0
The stack trace is reported below:
```
Python 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import TFBertModel, BertTokenizer, BertConfig
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2019-12-03 09:46:35.606174: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-03 09:46:35.610775: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz
2019-12-03 09:46:35.611320: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55fa0a097860 executing computations on platform Host. Devices:
2019-12-03 09:46:35.611341: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
>>> import tensorflow as tf
>>> config = BertConfig.from_pretrained("bert-base-cased", output_hidden_states=True)
>>> model = TFBertModel.from_pretrained("bert-base-cased", config=config)
>>> tok = BertTokenizer.from_pretrained("bert-base-cased")
>>> text = tok.encode("Ain't this [MASK] best thing you've ever seen?")
>>> inputs = tf.constant(text)
>>> outputs = model.predict(inputs)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 715, in predict
x, check_steps=True, steps_name='steps', steps=steps)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2419, in _standardize_user_data
all_inputs, y_input, dict_inputs = self._build_model_with_inputs(x, y)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2622, in _build_model_with_inputs
self._set_inputs(cast_inputs)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 2709, in _set_inputs
outputs = self(inputs, **kwargs)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 842, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in converted code:
relative to /home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages:
transformers/modeling_tf_bert.py:684 call *
outputs = self.bert(inputs, **kwargs)
tensorflow_core/python/keras/engine/base_layer.py:842 __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
transformers/modeling_tf_bert.py:512 call *
attention_mask = tf.fill(input_shape, 1)
tensorflow_core/python/ops/array_ops.py:171 fill
result = gen_array_ops.fill(dims, value, name=name)
tensorflow_core/python/ops/gen_array_ops.py:3602 fill
"Fill", dims=dims, value=value, name=name)
tensorflow_core/python/framework/op_def_library.py:545 _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor: (None, 1)
```
> I'm failing to reproduce what you're mentioning with the following snippet:
>
> ```python
> from transformers import TFBertModel, BertTokenizer, BertConfig
> import tensorflow as tf
>
> config = BertConfig.from_pretrained("bert-base-cased", output_hidden_states=True)
> model = TFBertModel.from_pretrained("bert-base-cased", config=config)
>
> tok = BertTokenizer.from_pretrained("bert-base-cased")
> text = tok.encode("Ain't this [MASK] best thing you've ever seen?")
>
> inputs = tf.constant(text)
> outputs = model.predict(inputs)
>
> print(outputs)
> ```
>
> Is there any way you could provide a script that reproduces the error in a few lines so that we may see what is wrong on our end?<|||||>Cannot reproduce on release 2.2.1.
Can you check with the latest release or master?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,969 | closed | Implemented concurrent encoding and converting of sequences for data binarization | 11-27-2019 16:14:08 | 11-27-2019 16:14:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=h1) Report
> Merging [#1969](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/49108288ba6e6dcfe554d1af98699ae7a1e6f39c?src=pr&el=desc) will **increase** coverage by `0.29%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1969 +/- ##
==========================================
+ Coverage 83.97% 84.26% +0.29%
==========================================
Files 105 104 -1
Lines 15529 15431 -98
==========================================
- Hits 13040 13003 -37
+ Misses 2489 2428 -61
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `92.1% <0%> (-0.11%)` | :arrow_down: |
| [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.14% <0%> (-0.09%)` | :arrow_down: |
| [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (-0.05%)` | :arrow_down: |
| [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.43% <0%> (-0.03%)` | :arrow_down: |
| [transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <0%> (ø)` | :arrow_up: |
| [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <0%> (ø)` | :arrow_up: |
| [transformers/tests/tokenization\_tests\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <0%> (ø)` | :arrow_up: |
| [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <0%> (ø)` | :arrow_up: |
| [transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45.94% <0%> (ø)` | :arrow_up: |
| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/1969/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=footer). Last update [4910828...265dbe8](https://codecov.io/gh/huggingface/transformers/pull/1969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 1,968 | closed | AlbertPreTrainedModel class is not available in release v2.2.0 | ## ❓ Questions & Help
In the release v2.2.0, the AlbertForSequenceClassification class inherits the AlbertPreTrainedModel class as "class AlbertForSequenceClassification(AlbertPreTrainedModel)".
However, in the docs, this pre-trained model is not documented, and in the released v2.2.0 transformers package, the AlbertPreTrainedModel class cannot be imported.
This is not a big issue, since we can use the BertPreTrainedModel class instead (as RoBERTa does), but it should be consistent. In particular, in the docs under ALBERT there is a class called TFBertForPretraining, which is confusing for users.
| 11-27-2019 14:39:34 | 11-27-2019 14:39:34 | Hi, indeed there was a mistake with `TFBertForPretraining` referenced under the ALBERT documentation. This was fixed with 3616209.
The `PreTrainedModel` class is not available for ALBERT, which is the same for CTRL, GPT, GPT-2, DistilBERT, CamemBERT, TransformerXL, XLM and XLNet.
Why is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?
<|||||>> Hi, indeed there was a mistake with `TFBertForPretraining` referenced under the ALBERT documentation. This was fixed with [3616209](https://github.com/huggingface/transformers/commit/361620954acf16b27727d763a591257b03f90b5d).
>
> The `PreTrainedModel` class is not available for ALBERT, which is the same for CTRL, GPT, GPT-2, DistilBERT, CamemBERT, TransformerXL, XLM and XLNet.
>
> Why is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?
I completely agree we can use PreTrainedModel or BertPreTrainedModel instead.
The question is that I do see the implementation of the class AlbertPreTrainedModel(PreTrainedModel) in the source code (transformers/modeling_albert.py line 313) but I cannot import it. It seems that it is not included in the released version. I just feel it is weird.
<|||||>Yes, it is not importable as it is an internal used by different models, but I fail to see a use-case where it would be useful for the library users.
Why is that class useful for your use-case, seeing as it's a simple wrapper over `PreTrainedModel` with a few overridden attributes?<|||||>I think we could add all the `XXXPretrainedModel` in `__init__` indeed. Would make it easier for people to build custom-made models that can load pretrained checkpoints as well.<|||||>Fixed on master |
transformers | 1,967 | closed | Trouble running 'bert-base-multilingual-cased' | ## ❓ Questions & Help
Hi,
I ran into trouble when applying the Quickstart BERT example to Korean news text.
When I run this code using the model 'bert-base-multilingual-cased',
```{.python}
# Predict hidden states features for each layer
with torch.no_grad():
# See the models docstrings for the detail of the inputs
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
# Transformers models always output tuples.
# See the models docstrings for the detail of all the outputs
# In our case, the first element is the hidden state of the last layer of the Bert model
encoded_layers = outputs[0]
# We have encoded our input sequence in a FloatTensor of shape (batch size, sequence length, model hidden dimension)
assert tuple(encoded_layers.shape) == (1, len(indexed_tokens), model.config.hidden_size)
```
I sometimes get an error like the one below:
```{.python}
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-39-b8e77f8ffc14> in <module>
2 with torch.no_grad():
3 # See the models docstrings for the detail of the inputs
----> 4 outputs = model(tokens_tensor, token_type_ids=segments_tensors)
5 # Transformers models always output tuples.
6 # See the models docstrings for the detail of all the outputs
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
712 head_mask = [None] * self.config.num_hidden_layers
713
--> 714 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
715 encoder_outputs = self.encoder(embedding_output,
716 attention_mask=extended_attention_mask,
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
176 inputs_embeds = self.word_embeddings(input_ids)
177 position_embeddings = self.position_embeddings(position_ids)
--> 178 token_type_embeddings = self.token_type_embeddings(token_type_ids)
179
180 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
//anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
//anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: index out of range: Tried to access index 2 out of table with 1 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
Any help will be greatly appreciated.
Thanks!
Info:
OS: MacOsX 10.14.6 (Mojave)
python : 3.7
PyTorch : 1.3.1
| 11-27-2019 13:29:37 | 11-27-2019 13:29:37 | I think the error was because text is greater than 512 tokens.
I got no error when text is smaller than 512 tokens.<|||||>Hi! Indeed the models have a maximum input size, which is 512 for BERT. You should have received a warning when tokenizing your sequence, but unfortunately, there isn't much more we can do to clarify this error further.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,966 | closed | Fix issue: #1962, input shape problem | Hi,
To fix #1962:
The input's shape seems to cause an error in the 2.2.0 version of the TF ALBERT model.
Hope that it can help. | 11-27-2019 10:07:20 | 11-27-2019 10:07:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=h1) Report
> Merging [#1966](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cc7968227e08858df4a5c618c739e1a3ca050196?src=pr&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1966 +/- ##
==========================================
- Coverage 84.26% 84.24% -0.02%
==========================================
Files 104 104
Lines 15431 15431
==========================================
- Hits 13003 13000 -3
- Misses 2428 2431 +3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1966/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2FsYmVydC5weQ==) | `85.49% <100%> (ø)` | :arrow_up: |
| [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1966/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.63% <0%> (-1.46%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=footer). Last update [cc79682...a1aec9c](https://codecov.io/gh/huggingface/transformers/pull/1966?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thank you for this @billpku ! |
transformers | 1,965 | closed | XLMForTokenClassification | # 🌟New model addition
## Model description
As the XLM version for token classification (NER tasks) was not implemented, I followed a similar path as for BertForTokenClassification, and it seems to work. The reason for this is that BERT multilingual works horribly for WikiNER in Spanish, achieving only 55% F1 with much fine-tuning, against spaCy's 87%. Therefore, I'm trying to improve this metric, and for that purpose I decided to use XLM, which is trained only on 15 different languages, not on more than 100. There's still one thing that my model implementation lacks, and that is the fact that the model dimension has to be set manually. I've been trying to add d_model to XLMConfig and then pass this config to my class, but it says XLMModel has no attribute d_model. If anyone can help me out with that, I'd appreciate it.
<!-- Important information -->
The code:
```python
class XLMForTokenClassification(XLMModel):

    def __init__(self, config, d_model=1024):
        super(XLMForTokenClassification, self).__init__(config)
        self.num_labels = config.num_labels

        self.xlm = XLMModel(config)
        self.dropout = nn.Dropout(config.dropout)
        self.classifier = nn.Linear(d_model, config.num_labels)

        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None, langs=None, token_type_ids=None,
                position_ids=None, head_mask=None, labels=None):

        outputs = self.xlm(input_ids,
                           attention_mask=attention_mask,
                           langs=langs,
                           token_type_ids=position_ids,
                           head_mask=head_mask)

        sequence_output = self.dropout(outputs[0])
        logits = self.classifier(sequence_output)

        outputs = (logits, ) + outputs[2:]  # add hidden states and attention if they are here

        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            # only keep active parts of the loss
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)[active_loss]
                active_labels = labels.view(-1)[active_loss]
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss, ) + outputs

        return outputs
```
## Open Source status
* [x] the model implementation is available: (give details)
It is uploaded above.
* [ ] the model weights are available: (give details)
* [x] who are the authors: (mention them)
Alejandro Vaca Serrano
## Additional context
<!-- Add any other context about the problem here. -->
| 11-27-2019 09:34:29 | 11-27-2019 09:34:29 | Hi @alexvaca0 ,
Try to remove the `d_model` parameter in the constructor. Use `config.emb_dim` (`emb_dim` is specified in the [xlm config](https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-en-2048-config.json)), so this should work:
```python
self.classifier = nn.Linear(config.emb_dim, config.num_labels)
```
:)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,964 | closed | How to increase model saving checkpoint from 50 to 1000? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
When I run the script `run_squad.py`, it creates model checkpoints every 50th iteration. How do I increase the checkpoint-saving interval from 50 to 1000? Where should I edit the code?
| 11-27-2019 07:44:21 | 11-27-2019 07:44:21 | **You don't have to modify the source code in this script**. When you call the `run_squad.py` script, you have to pass the `--save_steps` parameter and set its value to 1000 (as you can see [here](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L429)). So, the entire command would be something like this: `python run_squad.py ... --save_steps 1000`
> ## Questions & Help
> When I run the script `run_squad.py` , it creates model check points every 50th iteration. How to increase model saving checkpoint from 50 to 1000. Where should I edit the code ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Related to this question, how about making the default value (50) bigger (e.g., 1000) in scripts such as `run_squad.py` and `run_ner.py`?
If a `--save_steps` option is not specified, and the default value is used, many checkpoints are saved.<|||||>@tomohideshibata you're right that 50 is too low, I bumped it to 500 in 335dd5e.<|||||>Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,963 | closed | Did the underlying pre-trained models change somehow? | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X] my own modified scripts: I'm just loading the pre-trained bert-base-uncased model
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: document classification
## To Reproduce
Steps to reproduce the behavior:
1. `PRETRAINED_WEIGHTS = "bert-base-uncased"`
2. `model = TFBertForSequenceClassification.from_pretrained(PRETRAINED_WEIGHTS)`
3. see just below
```
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
sparse_categorical_accuracy = tf.keras.metrics.SparseCategoricalAccuracy("train_accuracy")
model.compile(optimizer=optimizer,
loss=loss,
metrics=[sparse_categorical_accuracy])
```
4. see just below
```
history = model.fit([train_input_ids, train_input_masks, train_segment_ids],
train_labels,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=([val_input_ids, val_input_masks, val_segment_ids],
val_labels),
use_multiprocessing=True,
verbose=1)
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
That the model begins training.
## Environment
* OS: Databricks (linux?)
* Python version: 3.7
* PyTorch version: I'm using the TF-flavor models
* PyTorch Transformers version (or branch): v2.1.1 & v2.2.0 (see additional context)
* Using GPU ? Yes
* Distributed of parallel setup ? No
* Any other relevant information:
## Additional context
In databricks I was not pinning the transformers version so it was installing the latest. I have always pinned the `mlflow` version to 1.4.0 though. I tried with what the latest prior release of `transformers` was (2.1.1) and still got the same error where this worked flawlessly before. The error is below and it specifies it is an `mlflow` issue, though in reality I think it may have something to do with the pretrained model that is loaded when we specify `bert-base-uncased`. It seems this underlying model changed independently of the latest release of `transformers`? Are the pre-trained models from some public Google repository or are they Huggingface-specific?
Thanks again for supporting TF 2, this repo has been a blessing!
Traceback:
```
UserWarning: Logging to MLflow failed: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
try_mlflow_log(mlflow.keras.log_model, self.model, artifact_path='model')
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [4,12,512,512] vs. [4,1,1,0]
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/add (defined at /databricks/python/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[cluster_1_1/xla_compile]]
[[cluster_0_1/merge_oidx_1/_22]]
(1) Invalid argument: Incompatible shapes: [4,12,512,512] vs. [4,1,1,0]
[[node tf_bert_for_sequence_classification/bert/encoder/layer_._0/attention/self/add (defined at /databricks/python/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[cluster_1_1/xla_compile]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_37652]
Function call stack:
distributed_function -> distributed_function
```
<!-- Add any other context about the problem here. -->
| 11-27-2019 06:43:02 | 11-27-2019 06:43:02 | Total user-error here. Made a change to a function that writes the TF Records to storage (the name of one of the features) and didn't propagate that info to the function i wrote that reads the TF Records back in, so it wasn't loading my input masks because it was looking for a key that didn't exist. |
transformers | 1,962 | closed | TFBertModel ValueError: Tried to convert 'dims' to a tensor and failed. Error: Cannot convert a partially known TensorShape to a Tensor | The code below was fine when transformers 2.1.1
but after I updated to transformers 2.2.0, it throws the error shown below the code:
```
model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model_fit = model.fit(train_input_ids, train_label,
batch_size=8, epochs=1,
validation_data=(validation_input_ids, validation_label)
)
```
```
ValueError: Tried to convert 'dims' to a tensor and failed.
Error: Cannot convert a partially known TensorShape to a Tensor: (None, 512)
``` | 11-27-2019 06:31:58 | 11-27-2019 06:31:58 | > The code below was fine when transformers 2.1.1
>
> but after I update to transformers 2.2.0
>
> ```
> model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)
> model.summary()
>
> optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
> metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
> model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
>
> model_fit = model.fit(train_input_ids, train_label,
> batch_size=8, epochs=1,
> validation_data=(validation_input_ids, validation_label)
> )
> ```
>
> ```
> ValueError: Tried to convert 'dims' to a tensor and failed.
> Error: Cannot convert a partially known TensorShape to a Tensor: (None, 512)
> ```
The problem seems to be caused by the input's shape.
For transformers 2.2.0
I fix it myself by modifying file "transformers/modeling_tf_albert.py"
On line 648, I change:
input_shape = input_ids.shape
Into:
input_shape = tf.shape(input_ids)
Then the problem was fixed.
Feel free to leave a comment if it work for you.<|||||>@billpku
Sorry to test it so lately.
It did fix my problem above.
but it didn't fix the code below which worked fine in 2.1.1 for custom the layer.
```
input_layer = Input(shape = (512,), dtype='int64')
bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
bert = bert[0]
dropout = Dropout(0.1)(bert)
flat = Flatten()(dropout)
classifier = Dense(units=5)(flat)
model = Model(inputs=input_layer, outputs=classifier)
model.summary()
```
```
ValueError: Tried to convert 'dims' to a tensor and failed.
Error: Cannot convert a partially known TensorShape to a Tensor: (None, 512)
```<|||||>Facing the same issue here.

<|||||>@AmalVijayan
I just checked my problem was fixed somehow.
How's yours now?
> ```
> input_layer = Input(shape = (512,), dtype='int64')
> bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
> bert = bert[0]
> dropout = Dropout(0.1)(bert)
> flat = Flatten()(dropout)
> classifier = Dense(units=5)(flat)
> model = Model(inputs=input_layer, outputs=classifier)
> model.summary()
> ```
|
transformers | 1,961 | closed | Problems with running 'run_lm_finetuning.py' with bert | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
My rough modification configuration (local)
root_path = "F://IdeaProjects/transformers"
bert_path = "F://BERT/chinese_L-12_H-768_A-12" (Downloaded from the 'https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip')
## Required parameters
parser.add_argument("--train_data_file", default=os.path.join(root_path, "data/train.txt"),...)
parser.add_argument("--output_dir", default=os.path.join(root_path, "output")...)
## Other parameters
parser.add_argument("--eval_data_file", default=os.path.join(root_path, "data/dev.txt"),...)
parser.add_argument("--model_type", default="bert"...)
parser.add_argument("--model_name_or_path", default=bert_path, ...)
parser.add_argument("--mlm", action='store_true', default=True,...)
parser.add_argument("--mlm_probability", type=float, default=0.15,...)
parser.add_argument("--config_name", default=os.path.join(bert_path, "bert_config.json"),..)
parser.add_argument("--tokenizer_name", default=bert_path, ...)
- [ Is my configuration correct?]
I changed this line, 'model = model_class.from_pretrained(args.model_name_or_path,
from_tf=True,#bool('.ckpt' in args.model_name_or_path),' Because my filename does not have this character('.ckpt').
Then running 'run_lm_finetuning.py' causes the problem in the 'modeling_bert.py' file of the 'load_tf_weights_in_bert' method:
'BertOnlyMLMHead' object has no attribute 'bias'
| 11-27-2019 02:35:53 | 11-27-2019 02:35:53 | @LysandreJik do we support this one?<|||||>We do support BERT in `run_lm_finetuning`, however, we do not support loading BERT checkpoints from the original BERT implementation.
If you wish to load a checkpoint that was pre-trained/fine-tuned using the original implementation (which seems to be what you're doing here), you can first convert to our implementation using [convert_bert_original_tf_checkpoint_to_pytorch](https://github.com/huggingface/transformers/blob/master/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py), it will then be usable by `run_lm_finetuning`.
If you wish to use TensorFlow with the outputted model, you can use the script [convert_pytorch_checkpoint_to_tf2](https://github.com/huggingface/transformers/blob/master/transformers/convert_pytorch_checkpoint_to_tf2.py) which will convert the pytorch model back to tensorflow 2.<|||||>@LysandreJik - not related to fine-tuning but converting one of the fine-tuned (rum_lm_finetuning.py) model to tensorflow checkpoint.
Here is the command I used:
bin/python3.6 convert_pytorch_checkpoint_to_tf2.py --tf_dump_path="../tf_test/" --model_type="bert" --pytorch_checkpoint_path="../pytorch_model.bin" --config_file='../config.json'
However, it was throwing the below error (log and stack trace)
Converting model type 1/1 bert
Converting checkpoint 1/15: ../pytorch_model.bin - model_type bert
Building TensorFlow model from configuration: {
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": true,
"output_hidden_states": true,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}
Traceback (most recent call last):
File "/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py", line 248, in <module>
only_convert_finetuned_models=args.only_convert_finetuned_models)
File "/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py", line 194, in convert_all_pt_checkpoints_to_tf
compare_with_pt_model=compare_with_pt_model)
File "/home/imagen/skc/bert/transformers-2.2.1/transformers/convert_pytorch_checkpoint_to_tf2.py", line 115, in convert_pt_checkpoint_to_tf
tf_model = load_pytorch_checkpoint_in_tf2_model(tf_model, pytorch_checkpoint_path)
File "/home/imagen/skc/environments/.virtualenvs/lstm_dev_tf2x/lib/python3.6/site-packages/transformers/modeling_tf_pytorch_utils.py", line 82, in load_pytorch_checkpoint_in_tf2_model
return load_pytorch_weights_in_tf2_model(tf_model, pt_state_dict, tf_inputs=tf_inputs, allow_missing_keys=allow_missing_keys)
File "/home/imagen/skc/environments/.virtualenvs/lstm_dev_tf2x/lib/python3.6/site-packages/transformers/modeling_tf_pytorch_utils.py", line 145, in load_pytorch_weights_in_tf2_model
assert name in pt_state_dict, "{} not found in PyTorch model".format(name)
**AssertionError: cls.seq_relationship.weight not found in PyTorch model**
Can you please explain what did go wrong with the conversion? This is one of the BERT-base fine-tuned model. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,960 | closed | Improving model saving and resuming | ## 🚀 Feature
The official transformer examples should make it easier to continue training a model that suddenly stopped (e.g. vm gets preempted in the middle of a training run).
To do this, the examples should be updated to save the `optimizer` and `scheduler` states to the `output_dir` as well as the current epoch to disk at the end of each epoch in a `training_state.pt` file. This way, the user could choose to continue training from a previous model checkpoint, but would continue training from the saved epoch and would use the saved tokenizer, optimizer, and scheduler.
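As a minimal sketch of what such a `training_state.pt` could contain (the file name, keys, and helper names below are illustrative, not the examples' actual implementation):
```python
import os
import torch

def save_training_state(output_dir, epoch, optimizer, scheduler):
    # Persist everything needed to resume: the epoch counter plus optimizer/scheduler states.
    torch.save(
        {
            "epoch": epoch,
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict(),
        },
        os.path.join(output_dir, "training_state.pt"),
    )

def load_training_state(output_dir, optimizer, scheduler):
    state = torch.load(os.path.join(output_dir, "training_state.pt"))
    optimizer.load_state_dict(state["optimizer"])
    scheduler.load_state_dict(state["scheduler"])
    return state["epoch"]  # epoch to resume training from
```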
## Motivation
If your VM gets preempted in the middle of a training run, you won't be able to properly continue training the model since the scheduler will be reset and the current learning rate will be lost.
## Additional context
If anyone is interested in this, I can implement the feature and start a pull request.
Related issues:
- https://github.com/huggingface/transformers/issues/1925
- https://github.com/huggingface/transformers/issues/839 | 11-27-2019 02:02:54 | 11-27-2019 02:02:54 | I think it's very useful this feature because, as you highlight, VMs and other variables could stop the (long) training process.
Technically, how would you implement this feature? By wrapping all the training code in a `try/except` statement and, when a particular exception occurs (which one?), asking the user whether they want to save progress so far and then writing it to a file?
It could be necessary to write a method for saving the optimizer state, the scheduler and the tokenizer in a **standardized** way. Reading #1925 and #839, I understand that @thomwolf suggests using the standard PyTorch methods for saving and loading, e.g. for the scheduler.
> ## Feature
> The official transformer examples should make it easier to continue training a model that suddenly stopped (e.g. vm gets preempted in the middle of a training run).
>
> To do this, the examples should be updated to save the `optimizer` and `scheduler` states to the `output_dir` as well as the current epoch to disk at the end of each epoch in a `training_state.pt` file. This way, the user could choose to continue training from a previous model checkpoint, but would continue training from the saved epoch and would use the saved tokenizer, optimizer, and scheduler.
>
> ## Motivation
> If your VM gets preempted in the middle of a training run, you won't be able to properly continue training the model since the scheduler will be reset and the current learning rate will be lost.
>
> ## Additional context
> If anyone is interested in this, I can implement the feature and start a pull request.
>
> Related issues:
>
> * #1925
> * #839<|||||>This would be very useful indeed.
I would guess that when the VM gets preempted the process in which your program runs is sent a `SIGTERM` or `SIGKILL` signal from the OS. You would need to catch this signal and act accordingly. Look at the [signal module](https://docs.python.org/3/library/signal.html) in python's standard library.
An elegant and very general solution would be to define a context manager (`with ... do`) in which we would execute the training and that handles all the backup logic on `SIGTERM` or `SIGKILL`.
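For example, a rough sketch of such a context manager could look like this (illustrative only; only `SIGTERM` is handled here since `SIGKILL` cannot be intercepted):
```
import signal
from contextlib import contextmanager

@contextmanager
def checkpoint_on_sigterm(save_fn):
    # save_fn is whatever callable persists the model, optimizer, scheduler and epoch.
    def handler(signum, frame):
        save_fn()
        raise SystemExit(0)

    previous = signal.signal(signal.SIGTERM, handler)
    try:
        yield
    finally:
        signal.signal(signal.SIGTERM, previous)
```
Training would then run inside `with checkpoint_on_sigterm(save_state): ...` so the backup logic fires when the VM is reclaimed.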
Do you want to give it a shot and make a Pull Request? You can @ me when you have a first draft and I can have a look and give you feedback.<|||||>@rlouf I was thinking to just save the optimizer and scheduler whenever the model is saved. As for resuming, you could just load in the optimizer state, scheduler state, and current epoch from the checkpoint file when passing in `--model_name_or_path`
I've got a basic implementation of this on my [fork](https://github.com/bkkaggle/transformers/tree/saving-and-resuming)
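A minimal sketch of the resume side (the file name and the surrounding `args`/`optimizer`/`scheduler` variables are assumed from the training script, so this is only an illustration):
```
import os
import torch

# Assumes the training script already created args, optimizer and scheduler.
state_path = os.path.join(args.model_name_or_path, "training_state.pt")
if os.path.isfile(state_path):
    state = torch.load(state_path, map_location="cpu")
    optimizer.load_state_dict(state["optimizer"])
    scheduler.load_state_dict(state["scheduler"])
    start_epoch = state["epoch"] + 1
else:
    start_epoch = 0
```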
<|||||>I've seen your changes in the source code. I think this is the "easiest" way to handle this feature, and I like it. Have you tested your code with unit tests? I don't see any test suite; it would be useful just to make sure it works as expected.
N.B.: I've left some comments under your changes in your repo, please read them.
> @rlouf I was thinking to just save the optimizer and scheduler whenever the model is saved. As for resuming, you could just load in the optimizer state, scheduler state, and current epoch from the checkpoint file when passing in `--model_name_or_path`
>
> I've got a basic implementation of this on my [fork](https://github.com/bkkaggle/transformers/tree/saving-and-resuming)<|||||>@bkkaggle Definitely the easiest solution if you don't mind resuming from the last checkpoint---mine was a bit heavy-duty :)
I also agree with @TheEdoardo93; Can you rebase your branch on the repo's `master` and open a pull request (if you haven't done so already)? I'm happy to have a closer look.
<|||||>I've updated my branch and submitted a [WIP] [pull request](https://github.com/huggingface/transformers/pull/1987) |
transformers | 1,959 | closed | update Roberta checkpoint conversion | - update to fix fairseq Roberta checkpoint conversion
Fairseq had removed `in_proj_weight` and `in_proj_bias` from the self attention module:
https://github.com/pytorch/fairseq/commit/4c6b689eebe66a53717dacf28cba7a11b6ffa64f
- create save directory if not exist | 11-27-2019 00:17:15 | 11-27-2019 00:17:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=h1) Report
> Merging [#1959](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5e289f69bc564c94132f77c89a34e5f1dd69a592?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1959 +/- ##
=======================================
Coverage 81.47% 81.47%
=======================================
Files 122 122
Lines 18342 18342
=======================================
Hits 14945 14945
Misses 3397 3397
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=footer). Last update [5e289f6...5190320](https://codecov.io/gh/huggingface/transformers/pull/1959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot @armancohan
I think we'll want to keep the possibility to import both types of models for backward compatibility. Can you add a switch based on an identification of the model type?<|||||>@armancohan (rebased on top of master so I force-pushed to your fork)<|||||>So, the fairseq weights themselves didn't change, it's the multiheadattention API that did, in fairseq `0.9.0`. So I'll just check that the fairseq version is >= 0.9 in the script.
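A minimal sketch of what such a version check could look like (illustrative, not necessarily the exact code that ends up in the script):
```
import fairseq
from packaging import version

# Older fairseq releases still exposed in_proj_weight/in_proj_bias, which this script no longer handles.
if version.parse(fairseq.__version__) < version.parse("0.9.0"):
    raise RuntimeError("This conversion script requires fairseq >= 0.9.0, found %s" % fairseq.__version__)
```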
I've also updated the script to not hardcode the vocab length, which should make it compatible with other roberta-like fairseq models such as CamemBERT + XLM-R out of the box.
cc @myleott @louismartin <|||||>thanks @julien-c |
transformers | 1,958 | closed | run_ner.py --do_predict inference mode errors. Right data format? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello again,
I'm here to bother you one more time.
I fine-tuned preloaded BioBERT weights on a custom dataset to run biomedical NER.
Now I want to use the model for inference mode on a 'raw' set of documents. I renamed this set 'test.txt' and formatted it the following way (documents are separated by '-DOCSTART- (num_doc)' lines):
```
to O
be O
referred O
to O
the O
location O
of O
the O
disease O
in O
the O
skeletal O
structures O
examined O
; O
unchanged O
the O
areas O
of O
bone O
rarefaction O
reported O
to O
the O
sternum O
as O
a O
result O
of O
median O
sternotomy O
. O
```
I had to add the 'fake' labels on the right and place a space " " between col1 and col2.
The error I now get is:
```
Traceback (most recent call last):
File "run_ner.py", line 531, in <module>
main()
File "run_ner.py", line 522, in main
output_line = line.split()[0] + " " + predictions[example_id].pop(0) + "\n"
IndexError: list index out of range
```
Many thanks again. | 11-26-2019 22:43:27 | 11-26-2019 22:43:27 | Three questions:
- the error occurs at line 522, so at line 507 you've saved the file called `test_results.txt`. Have you checked the content of this file, and is it correct?
- has the input file been formatted as CoNLL-2003?
- **N.B.**: moreover, the code block from line 512 to line 525 saves the predictions obtained from the NER to a .txt file. But the `predictions` variable is already available at line 505. Have you looked at its content? Maybe it is only a saving problem.
> ## Questions & Help
> Hello again,
>
> I'm here to bother you one more time.
>
> I fine-tuned preloaded BioBERT weights on a custom dataset to run biomedical NER.
>
> Now I want to use the model for inference mode on a 'raw' set of documents. I renamed this set 'test.txt' and formatted it the following way (documents are separated by '-DOCSTART- (num_doc)' lines):
>
> ```
> to O
> be O
> referred O
> to O
> the O
> location O
> of O
> the O
> disease O
> in O
> the O
> skeletal O
> structures O
> examined O
> ; O
>
> unchanged O
> the O
> areas O
> of O
> bone O
> rarefaction O
> reported O
> to O
> the O
> sternum O
> as O
> a O
> result O
> of O
> median O
> sternotomy O
> . O
> ```
>
> I had to add the 'fake' labels on the right and place a space " " between col1 and col2.
>
> The error I now get is:
>
> ```
> Traceback (most recent call last):
> File "run_ner.py", line 531, in <module>
> main()
> File "run_ner.py", line 522, in main
> output_line = line.split()[0] + " " + predictions[example_id].pop(0) + "\n"
> IndexError: list index out of range
> ```
>
> Many thanks again.<|||||>Ciao @TheEdoardo93 ,
Thanks for your support!
- I formatted the test set trying to follow the indications from the tutorial on the german-eval, with the first column being the token and the second being the B-I-O tags (in this set it's just a pile of Os to fill the column). They are space-separated.
- `test_results.txt` is saved and shows precision, recall, f-1, and loss. All are terrible of course, as the test set was actually filled with the dummy BIO tags.
- `test_predictions.txt` is truncated after about 50 lines of token+BIO prediction.
I'm now trying to print the content of `predictions`, I'll let you know.<|||||>I'll wait for the content of your `predictions` variable :D
We can implement a saving method that works as we expect (instead of using the code in the `run_ner.py` script) and see what happens!
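For example, a rough sketch of a standalone saving loop (the variable names and toy data below are just placeholders, not the ones used in `run_ner.py`):
```
# Placeholder data standing in for the tokenized test sentences and the model output.
sentences = [["unchanged", "the", "areas"], ["to", "be", "referred"]]
predictions = [["O", "O", "B-Multi-tissue_structure"], ["O", "O", "O"]]

with open("test_predictions.txt", "w") as writer:
    for words, labels in zip(sentences, predictions):
        for word, label in zip(words, labels):
            writer.write("{} {}\n".format(word, label))
        writer.write("\n")  # blank line between sentences, as in the CoNLL format
```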
> Ciao @TheEdoardo93 ,
> Thanks for your support!
>
> * I formatted the test set trying to follow the indications from the tutorial on the german-eval, with the first column being the token and the second being the B-I-O tags (in this set it's just a pile of Os to fill the column). They are space-separated.
> * `test_results.txt` is saved and shows precision, recall, f-1, and loss. All are terrible of course, as the test set was actually filled with the dummy BIO tags.
> * `test_predictions.txt` is truncated after about 50 lines of token+BIO prediction.
>
> I'm now trying to print the content of `predictions`, I'll let you know.<|||||>I'm back.
What I did was: changed columns' separation from tab to space (I was wrong in the previous comment, I thought I already changed it).
Now the code runs properly and `test_predictions.txt` is complete.
This is a snapshot of `print(predictions)`:
```
[['O', 'O', 'B-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organism_subdivision', 'I-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'B-Cancer', 'I-Cancer', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O'], ..., ['O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]
```
There is another minor issue I guess, i.e. a long series of warnings about no predictions because of exceeded maximum sequence length. The non-predicted tokens appear to be not really relevant for my final interest, but I'd like to have a complete output nonetheless.
I will try to place a newline not only after usual end-of-sentence punctuation (.!?), but also after semi-colons and colons, in order to split each document in more pieces.
Is it a strategy that makes sense or have I misinterpreted the meaning of the maximum sequence length?
<|||||>Do you have sentences with length greater than **512 tokens**? BioBERT allows to have sentences with 512 tokens length maximum, as stated in the [paper](https://watermark.silverchair.com/btz682.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAApcwggKTBgkqhkiG9w0BBwagggKEMIICgAIBADCCAnkGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMYCT-oIlIkXngiEprAgEQgIICSsx4GD3tbVdWpu1VPO7avpF_9v-YeT2MtTj6ysC7p_RRxtqG74n5C_tpst2LvM8UKCDfiuvU4bWi8PfJiJfKoQiWwR4AH4K0JC2m-Q8YC2V8_Jfuk-AL_CR8oWJc2U40FdB2fyV2tdoYxV0v1A35Qjg6PEujCkc3ztZqcGctW1_awddqskqkGF8Oz02kiwFQyHHlMPuRAewMopnxii_Pqo-nSNynCl03WCCGUKPbkC-vbPwIjo7vjz-opJQaNcTrNOLI8xPzQm3qT5_R85w3mm-CpHbo2rj4LW7YkJrswc7Z4KOlEfdq7AC5WkiIYhYqyauVLTDNzVYwSYJ_L6RsPeNlfxv3rm71J7fppWO_fu4Mbn8vnzmjKS0nqxdEbRcI4bGkpkjCvW-sVa3FIcRbNlOp_fH_PTeMf3VwIR_wGR0Nrw_80_BMzqy774SB1LitxarWsA7h3dU7Gp1f162TloTdqISAsTzfJJSTa4YVU2qHDp2iRzghvsBlXGhtuuiNkLQ_TblRFq3hdMpLtpHH5KlfahZ0tMvfBvbc_YGLi-9U5NmQbUnM0unhb73mQ5SneLAAD9JlLQv-4pXwYDIGi9ekn5G2RwueTOKSiKji8dm1rCtmUFXVL56WsPUdNkgJROoqGCC87_iVdV95TjpL7jVvNfOX8Bvh1eF_iCGyfrsKyK1aDpvY8B4vt3uUJowPlFjDo21AXOe53aAgnb9yay-t53WzmTNw-Q6lfZNiWsSQn9H1cUi7g8P5bRruZkmL8HaYlZje8TVNIn4).
> The maximum sequence length was fixed to 512
If you have sentences with more than 512 tokens, you have to apply a workaround, e.g. splitting a sentence of length 1024 into two sentences of length 512 and combining their outputs in some manner.
However, the strategy you've proposed (e.g. splitting by comma, dot, semicolon, etc.) works! Try this approach and share the results with us! I suggest you do a visual evaluation/comparison between the current output and the output you'll obtain with the strategy you highlighted.
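A rough sketch of the chunking idea (illustrative; `bert-base-cased` and the placeholder text are just stand-ins for your BioBERT vocabulary and documents):
```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")  # stand-in for the BioBERT vocab
long_report = " ".join(["finding"] * 1500)                    # placeholder for a long clinical document

tokens = tokenizer.tokenize(long_report)
max_len = 510  # leave room for [CLS] and [SEP]
chunks = [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]
# run NER on each chunk separately, then concatenate the predicted label sequences
```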
> I'm back.
>
> What I did was: changed columns' separation from tab to space (I was wrong in the previous comment, I thought I already changed it).
>
> Now the code runs properly and `test_predictions.txt` is complete.
> This is a snapshot of `print(predictions)`:
>
> ```
> [['O', 'O', 'B-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Organism_subdivision', 'I-Organism_subdivision', 'O', 'O', 'O', 'O', 'O', 'B-Cancer', 'I-Cancer', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Immaterial_anatomical_entity', 'O', 'O'], ..., ['O', 'O', 'O', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O', 'B-Organ', 'O'], ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-Tissue', 'O', 'O', 'O', 'O', 'B-Multi-tissue_structure', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]
> ```
>
> There is another minor issue I guess, i.e. a long series of warnings about no predictions because of exceeded maximum sequence length. The non-predicted tokens appear to be not really relevant for my final interest, but I'd like to have a complete output nonetheless.
> I will try to place a newline not only after usual end-of-sentence punctuation (.!?), but also after semi-colons and colons, in order to split each document in more pieces.
> Is it a strategy that makes sense or have I misinterpreted the meaning of the maximum sequence length?<|||||>Quite funnily, now a lot more tokens are without predictions.
What I did was just adding a newline after each semicolon with `sed`.
A question that I thought was easy to answer: what constitutes a sequence in BERT relative to this task? Is it a sequence of tokens between empty lines? Or between defined punctuation?<|||||>Taken from the official BERT [paper](https://arxiv.org/pdf/1810.04805.pdf):
> Throughout this work, a “sentence” can be an arbitrary span of contiguous text, rather than an actual linguistic sentence. A “sequence” refers to the input token sequence to BERT, which may be a single sentence or two sentences packed together.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,957 | closed | Do we need to add [CLS] and [SEP] for BertForMaskedLM ? | ## ❓ Questions & Help
https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841
Seems like the example is wrong? | 11-26-2019 22:29:03 | 11-26-2019 22:29:03 | like this ?
```
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
```<|||||>The output **without** `add_special_tokens=True`:
```
import torch
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = 'Hello, my dog is cute'
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
>>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])
```
If you inspect the vocabulary built by BertTokenizer (accessible through `tokenizer.vocab`), you can see that the tokens [CLS] and [SEP] have IDs 101 and 102, respectively. So `tokenizer.encode` already adds these two tokens at the start and at the end of each encoded sentence.
The output **with** `add_special_tokens=True`:
```
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', add_special_tokens=True)
text = 'Hello, my dog is cute'
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
>>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])
```
As you can see, **the output obtained is the same**.
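A quick way to double-check which IDs the special tokens map to (a small sketch using the `tokenizer.vocab` mentioned above):
```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.cls_token, tokenizer.vocab[tokenizer.cls_token])  # [CLS] 101
print(tokenizer.sep_token, tokenizer.vocab[tokenizer.sep_token])  # [SEP] 102
```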
Moreover, Transformers moved last night from version 2.1.1 to 2.2.0, and reading the release notes [here](https://github.com/huggingface/transformers/releases) we can see the statement **Tokenizers now add special tokens by default.**
> ## Questions & Help
> https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841
>
> Seems like the example is wrong?<|||||>Yes, my version v2.1.1 will not set add_special_tokens to True by default. Thanks for your comment.
> like this ?
>
> ```
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
> ```<|||||>> The output **without** `add_special_tokens=True`:
>
> ```
> import torch
> from transformers import BertTokenizer, BertForMaskedLM
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> text = 'Hello, my dog is cute'
> input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
> >>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])
> ```
>
> If you inspect the vocabulary built by BertTokenizer (accessible through `tokenizer.vocab`), you can see that the token [CLS] and [SEP] have ID 101 and 102, respectively. So, `tokenizer.encode` already add these two tokens at the start and at the end of each encoded sentence.
>
> The output **with** `add_special_tokens=True`:
>
> ```
> import torch
> from transformers import BertTokenizer
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', add_special_tokens=True)
> text = 'Hello, my dog is cute'
> input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)
> >>> tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])
> ```
>
> As you can see, **the output obtained is the same**.
>
> Moreover, this night Transformers passed from 2.1.1 to 2.2.0 version, and reading [here](https://github.com/huggingface/transformers/releases) we can see the statement **Tokenizers now add special tokens by default.**.
>
> > ## Questions & Help
> > https://github.com/huggingface/transformers/blob/cc7968227e08858df4a5c618c739e1a3ca050196/transformers/modeling_bert.py#L837-L841
> >
> > Seems like the example is wrong?
create pull request #1971 to make this less ambiguous across different versions. |
transformers | 1,956 | closed | get_linear_schedule_with_warmup Scheduler | Hello,
When I try to execute the line of code below, Python gives me an import error:
```js
from pytorch_transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
AdamW, get_linear_schedule_with_warmup)
ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'pytorch_transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/__init__.py)
```
What should I then import to use the linear scheduler with warm up?
Thank you, | 11-26-2019 19:17:42 | 11-26-2019 19:17:42 | You might be using an old version of the library, try updating it to v2.2.0<|||||>You're trying to import a method only available in more recent Transformers versions from a (very old) Transformers version called _pytorch-transformers_.
With Transformers 2.1.1 (the second most recent release) and the new version 2.2.0, you can correctly import `get_linear_schedule_with_warmup`. In fact, Transformers changed the source code of its optimization utilities (e.g. the learning rate schedulers). You can see the changes [here](https://github.com/huggingface/transformers/commit/022525b0031bcdbbb62d1223f75919983f2ac426).
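For reference, the new API is typically used roughly like this (a sketch that assumes a stand-in `model` and a known number of training steps):
```
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for a real transformer model
optimizer = AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=1000
)

for step in range(1000):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```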
> Hello,
>
> When I try to execute the line of code below, Python gives me an import error:
>
> ```js
> from pytorch_transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
> AdamW, get_linear_schedule_with_warmup)
>
> ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'pytorch_transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/__init__.py)
> ```
>
> What should I then import to use the linear scheduler with warm up?
>
> Thank you,<|||||>You should use the `transformers` library instead of the `pytorch_transformers`. The `get_linear_schedule_with_warmup` is only defined in the former, in its latest version.<|||||>Thank you all,<|||||>Hello,
So I installed transformers 2.2.0,
```
pip install transformers
```
and tried to import the same things:
```js
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
AdamW, get_linear_schedule_with_warmup)
```
and it's still giving me the same error:
```
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
AdamW, get_linear_schedule_with_warmup)
2019-11-27 08:40:15.940560: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-27 08:40:15.954925: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe20d2f5a50 executing computations on platform Host. Devices:
2019-11-27 08:40:15.954938: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "<ipython-input-1-99af21631e15>", line 1, in <module>
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
```
What should I do to be able to use the linear warm up scheduler?
Thank you,<|||||>It's very strange... In my environment the import statement works as expected.
```
> import transformers
> transformers.__version__
>>> '2.2.0'
> from transformers import get_linear_schedule_with_warmup
> ...
```
Please, share with us your **OS** and your **Python version**.
> Hello,
>
> So I installed transformers 2.2.0,
>
> ```
> pip install transformers
> ```
>
> and tried to import the same things:
>
> ```js
> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
> AdamW, get_linear_schedule_with_warmup)
> ```
>
> and it's still giving me the same error:
>
> ```
> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
> AdamW, get_linear_schedule_with_warmup)
> 2019-11-27 08:40:15.940560: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
> 2019-11-27 08:40:15.954925: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fe20d2f5a50 executing computations on platform Host. Devices:
> 2019-11-27 08:40:15.954938: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
> Traceback (most recent call last):
>
> File "<ipython-input-1-99af21631e15>", line 1, in <module>
> from transformers import (GPT2Config, GPT2LMHeadModel, GPT2DoubleHeadsModel,
>
> ImportError: cannot import name 'get_linear_schedule_with_warmup' from 'transformers' (/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
> ```
>
> What should I do to be able to use the linear warm up scheduler?
>
> Thank you,<|||||>Hello,
I tried uninstalling transformers and install the module again and it works now.
Thank you for all your help, |
transformers | 1,955 | closed | run_squad.py crashes during do_eval | ## 🐛 Bug
When running run_squad.py as provided in the README, the prediction/evaluation component of the script crashes once training is complete.
No prediction files are written.
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details) example squad fine tuning
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) official squad dev set 1.1
* [ ] my own task or dataset: (give details) the checkpoint was made using a custom training dataset in squad format, but it appears to be an eval bug
## To Reproduce
Steps to reproduce the behavior:
1. finish training as specified in README
2. I ran with this command
CUDA_VISIBLE_DEVICES=10,11,12,13,14,15 python -m torch.distributed.launch --nproc_per_node=6 ./examples/run_squad.py --model_type bert --model_name_or_path bert-large-uncased-whole-word-masking --do_train --do_eval --do_lower_case --train_file /data/data/SQUAD/train-v1.1json --predict_file /data/data/SQUAD/dev-v1.1.json --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir models/wwm_uncased_finetuned_squad_supp/ --per_gpu_eval_batch_size=6 --per_gpu_train_batch_size=6 --save_steps 500
I also ran with just do_eval using the same model and it produced the same error
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
11/26/2019 18:29:01 - INFO - __main__ - Saving features into cached file /data/data/SQUAD/cached_dev_bert-large-uncased-whole-word-masking_384
11/26/2019 18:29:20 - INFO - __main__ - ***** Running evaluation *****
11/26/2019 18:29:20 - INFO - __main__ - Num examples = 10833
11/26/2019 18:29:20 - INFO - __main__ - Batch size = 6
Evaluating: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 301/301 [00:48<00:00, 6.20it/s]
11/26/2019 18:30:09 - INFO - __main__ - Evaluation done in total 48.868770 secs (0.004511 sec per example)
11/26/2019 18:30:09 - INFO - utils_squad - Writing predictions to: models/wwm_uncased_finetuned_squad_supp/predictions_.json
11/26/2019 18:30:09 - INFO - utils_squad - Writing nbest to: models/wwm_uncased_finetuned_squad_supp/nbest_predictions_.json
Traceback (most recent call last):
File "./examples/run_squad.py", line 573, in <module>
main()
File "./examples/run_squad.py", line 562, in main
result = evaluate(args, model, tokenizer, prefix=global_step)
File "./examples/run_squad.py", line 284, in evaluate
args.version_2_with_negative, args.null_score_diff_threshold)
File "/home/clong/git/transformers/examples/utils_squad.py", line 532, in write_predictions
result = unique_id_to_result[feature.unique_id]
KeyError: 1000000000
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/conda/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 253, in <module>
main()
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/launch.py", line 249, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', './examples/run_squad.py', '--local_rank=5', '--model_type', 'bert', '--model_name_or_path', 'bert-large-uncased-whole-word-masking', '--do_eval', '--do_lower_case', '--train_file', '/data/data/SQUAD/train-v1.1.json', '--predict_file', '/data/data/SQUAD/dev-v1.1.json', '--learning_rate', '3e-5', '--num_train_epochs', '2', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', 'models/wwm_uncased_finetuned_squad_supp/', '--per_gpu_eval_batch_size=6', '--per_gpu_train_batch_size=6', '--save_steps', '500']' returned non-zero exit status 1.
## Expected behavior
no crashing and predictions written
## Environment
* OS: Ubuntu 18.04 in NVIDIA pytorch container
* Python version: 3.6.9 anaconda
* PyTorch version: 1.3.0 custom nvidia version
* PyTorch Transformers version (or branch): pip install
* Using GPU ? yes
* Distributed of parallel setup ? using 6 of 16 gpu's on system
* Any other relevant information:
| 11-26-2019 18:46:43 | 11-26-2019 18:46:43 | same issue<|||||>I tracked it down further this morning and found the problem: you cannot run do_eval in PyTorch distributed mode. do_eval works completely fine when distributed training is not involved. This should probably result in a change to the README
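One way to avoid the crash inside the script itself would be to run the evaluation only on the main process, roughly like this (a sketch reusing names from `run_squad.py`, not the actual fix):
```
# Sketch: only the main process (local_rank 0, or -1 when not distributed) runs evaluation.
if args.do_eval and args.local_rank in [-1, 0]:
    result = evaluate(args, model, tokenizer)
```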
<|||||>👍Just ran it, I confirm that do_eval runs well without distributed mode<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,954 | closed | BertForMultipleChoice | ## ❓ Questions & Help
Shape Error when using BrtForMultipleChoice

Below is the model i used:

| 11-26-2019 18:14:01 | 11-26-2019 18:14:01 | Hi, could you try and test this on a more recent version of the library and let me know if it works?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,953 | closed | the output type of TFBertModel is weird | ```
model = TFBertModel.from_pretrained('bert-base-chinese')
model.summary()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
predictions = model.predict(validation_input_ids)
print(type(predictions))
print(predictions.shape)
```
```
<class 'list'>
AttributeError: 'list' object has no attribute 'shape'
```
The type is weird.
It is a (N, 512, 768) shape numpy array inside a List.
I had to take it out from the List
```
predictions = predictions[0]
print(predictions.shape)
```
```
(8359, 512, 768)
``` | 11-26-2019 17:14:07 | 11-26-2019 17:14:07 | This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) .
And there is also position left for some one who want to get all the hidden state from each level inside the model ( may represent different level of abstraction besides the last one ) or the attention matrix. <|||||>> This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) .
Because of this
if I want use tf.keras to custom the layer below TFBertModel
I have to add this particular line
bert = bert[0]
```
input_layer = Input(shape = (512,), dtype='int64')
bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
bert = bert[0] # I have to add this particular line
dropout = Dropout(0.1)(bert)
flat = Flatten()(dropout)
classifier = Dense(units=5)(flat)
model = Model(inputs=input_layer, outputs=classifier)
model.summary()
```
```
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 512)] 0
_________________________________________________________________
tf_bert_model (TFBertModel) ((None, 512, 768), (None, 102267648
_________________________________________________________________
dropout_37 (Dropout) (None, 512, 768) 0
_________________________________________________________________
flatten (Flatten) (None, 393216) 0
_________________________________________________________________
dense (Dense) (None, 5) 1966085
=================================================================
Total params: 104,233,733
Trainable params: 104,233,733
Non-trainable params: 0
```<|||||>> > This is because in a bert pretraining progress, there are two tasks: masked token prediction and next sentence predition . The first needs hidden state of each tokens ( shape: [batch_size, sequence_length, hidden_size]) the second needs the embedding of the whole sequence (shape : [batch_size, hidden_size] ) .
>
> Because of this
> if I want use tf.keras to custom the layer below TFBertModel
> I have to add this particular line
> bert = bert[0]
>
> ```
> input_layer = Input(shape = (512,), dtype='int64')
> bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
>
> bert = bert[0] # I have to add this particular line
>
> dropout = Dropout(0.1)(bert)
> flat = Flatten()(dropout)
> classifier = Dense(units=5)(flat)
> model = Model(inputs=input_layer, outputs=classifier)
> model.summary()
> ```
>
> ```
> Model: "model"
> _________________________________________________________________
> Layer (type) Output Shape Param #
> =================================================================
> input_1 (InputLayer) [(None, 512)] 0
> _________________________________________________________________
> tf_bert_model (TFBertModel) ((None, 512, 768), (None, 102267648
> _________________________________________________________________
> dropout_37 (Dropout) (None, 512, 768) 0
> _________________________________________________________________
> flatten (Flatten) (None, 393216) 0
> _________________________________________________________________
> dense (Dense) (None, 5) 1966085
> =================================================================
> Total params: 104,233,733
> Trainable params: 104,233,733
> Non-trainable params: 0
> ```
That's right. But for sentence-level classification, I recommend you use the embedding of the whole sequence.
```
bert = bert[1] # instead of bert = bert[0]
```
Just like what the official sequence classification does in the **TFBertForSequenceClassification** class at
https://github.com/huggingface/transformers/blob/master/transformers/modeling_tf_bert.py<|||||>> ```
> bert = bert[1] # instead of bert = bert[0]
> ```
May I ask why?
It looks like it reduces the number of features going into the flatten layer.
It doesn't look like the whole sequence.
```
input_layer = Input(shape = (512,), dtype='int64')
bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
bert = bert[1]
dropout = Dropout(0.1)(bert)
flat = Flatten()(dropout)
classifier = Dense(units=5)(flat)
model = Model(inputs=input_layer, outputs=classifier)
model.summary()
```
```
Model: "model_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_6 (InputLayer) [(None, 512)] 0
_________________________________________________________________
tf_bert_model_5 (TFBertModel ((None, 512, 768), (None, 102267648
_________________________________________________________________
dropout_225 (Dropout) (None, 768) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 768) 0
_________________________________________________________________
dense_5 (Dense) (None, 5) 3845
=================================================================
Total params: 102,271,493
Trainable params: 102,271,493
Non-trainable params: 0
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>It is still a problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,952 | closed | suggest to track repo w/ https rather than ssh | cf #1943 we tell users to track the repository via https rather than ssh (as it requires us to enable ssh). | 11-26-2019 15:31:52 | 11-26-2019 15:31:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=h1) Report
> Merging [#1952](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8e5d84fcc1a645d3c13b8a2f64fa995637440dad?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1952 +/- ##
======================================
Coverage 84% 84%
======================================
Files 97 97
Lines 14340 14340
======================================
Hits 12047 12047
Misses 2293 2293
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=footer). Last update [8e5d84f...c6edc47](https://codecov.io/gh/huggingface/transformers/pull/1952?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 1,951 | closed | Benchmark not replicable | ## ❓ Questions & Help
Hello. I wanted to test if everything is all right with my downloads, so I ran the code snippet you provided in the section **Fine-tuning Bert model on the MRPC classification task** in the main README file (the only difference being the number of GPUs - I use 4). However, my evaluation results are well below the ones you mention. I get
```
acc = 0.8725490196078431
acc_and_f1 = 0.888829254329469
f1 = 0.9051094890510949
```
in my terminal.
There is also no output file in the specified folder.
Do you know what could cause this?
Thanks for answer
MS | 11-26-2019 15:26:43 | 11-26-2019 15:26:43 | Hello @Pointy-Hat
Please tell us more, as explained [here](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md).<|||||>Technical data of the cluster used for computation:
System: CentOS Linux release 7.5.1804 (Core)
Python: 3.7.4
Pytorch: 1.3.1
Code run:
```
python3 -m torch.distributed.launch --nproc_per_node 4 ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--task_name MRPC \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/MRPC/ \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mrpc_single/ \
--overwrite_output_dir \
--overwrite_cache
```
Expected results (as per README.txt):
```
acc = 0.8823529411764706
acc_and_f1 = 0.901702786377709
f1 = 0.9210526315789473
```
Obtained results:
```
acc = 0.8725490196078431
acc_and_f1 = 0.888829254329469
f1 = 0.9051094890510949
```
GLUE data obtained via their `download_glue_data.py` script, as recommended in README.<|||||>did you try multiple random seeds?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,950 | closed | word or sentence embedding from BERT model | How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:
sentence vector:
`sentence_vector = bert_model("This is an apple").vector`
word_vectors:
```
words = bert_model("This is an apple")
word_vectors = [w.vector for w in words]
```
I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).
| 11-26-2019 13:54:48 | 11-26-2019 13:54:48 | You can use [`BertModel`](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel), it'll return the hidden states for the input sentence.<|||||>Found it, thanks @bkkaggle . Just for others who are looking for the same information.
Using Pytorch:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
Using Tensorflow:
```
import tensorflow as tf
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```<|||||>This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.
Example:
```python3
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1]
}
output = bertmodel(**inputs)
logits = output[0]
hidden_states = output[1]
```
<|||||>By using this code, you can obtain a PyTorch tensor of (1, N, 768) shape, where _N_ is the number of different tokens extracted from `BertTokenizer`. If you want to build the sentence vector by exploiting these N tensors, how do you do that? @engrsfi
> Found it, thanks @bkkaggle . Just for others who are looking for the same information.
>
> Using Pytorch:
>
> ```
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = BertModel.from_pretrained('bert-base-uncased')
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
>
> Using Tensorflow:
>
> ```
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```<|||||>> This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.
>
> Example:
>
> ```python
> inputs = {
> "input_ids": batch[0],
> "attention_mask": batch[1]
> }
>
> output = bertmodel(**inputs)
> logits = output[0]
> hidden_states = output[1]
> ```
I am interested in the last hidden states which are seen as kind of embeddings. I think you are referring to all hidden states including the output of the embedding layer.
```
"**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
of shape ``(batch_size, sequence_length, hidden_size)``:
Hidden-states of the model at the output of each layer plus the initial embedding outputs
```.<|||||>> By using this code, you can obtain a PyTorch tensor of (1, N, 768) shape, where _N_ is the number of different tokens extracted from `BertTokenizer`. If you want to build the sentence vector by exploiting these N tensors, how do you do that? @engrsfi
>
> > Found it, thanks @bkkaggle . Just for others who are looking for the same information.
> > Using Pytorch:
> > ```
> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> > model = BertModel.from_pretrained('bert-base-uncased')
> > input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> > outputs = model(input_ids)
> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> > ```
> >
> >
> > Using Tensorflow:
> > ```
> > import tensorflow as tf
> > from transformers import BertTokenizer, TFBertModel
> >
> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> > model = TFBertModel.from_pretrained('bert-base-uncased')
> > input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> > outputs = model(input_ids)
> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> > ```
You can take an average of them. However, I think the embeddings at first position [CLS] are considered a kind of sentence vector because only those are fed to a further classifier if any for downstream tasks. Disclaimer: I am not sure about it.<|||||>> > This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.
> > Example:
> > ```python
> > inputs = {
> > "input_ids": batch[0],
> > "attention_mask": batch[1]
> > }
> >
> > output = bertmodel(**inputs)
> > logits = output[0]
> > hidden_states = output[1]
> > ```
>
> I am interested in the last hidden states which are seen as kind of embeddings. I think you are referring to all hidden states including the output of the embedding layer.
>
> ```
> "**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
> of shape ``(batch_size, sequence_length, hidden_size)``:
> Hidden-states of the model at the output of each layer plus the initial embedding outputs
> ```.
> ```
Should be as simple as grabbing the last element in the list:
```python3
last_layer = hidden_states[-1]
```
<|||||>@maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.
https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertModel
```
Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
**last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``
Sequence of hidden-states at the output of the last layer of the model.
**pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``
Last layer hidden-state of the first token of the sequence (classification token)
further processed by a Linear layer and a Tanh activation function. The Linear
layer weights are trained from the next sentence prediction (classification)
objective during Bert pretraining. This output is usually *not* a good summary
of the semantic content of the input, you're often better with averaging or pooling
the sequence of hidden-states for the whole input sequence.
**hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
of shape ``(batch_size, sequence_length, hidden_size)``:
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
**attentions**: (`optional`, returned when ``config.output_attentions=True``)
list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
```
BTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` when I set this Flag to True, not sure if I can extract last hidden states from that.
```
output = bertmodel(**inputs)
logits = output[0]
hidden_states = output[1]
```<|||||>> @maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.
>
> ```
> Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
> **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``
> Sequence of hidden-states at the output of the last layer of the model.
> **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``
> Last layer hidden-state of the first token of the sequence (classification token)
> further processed by a Linear layer and a Tanh activation function. The Linear
> layer weights are trained from the next sentence prediction (classification)
> objective during Bert pretraining. This output is usually *not* a good summary
> of the semantic content of the input, you're often better with averaging or pooling
> the sequence of hidden-states for the whole input sequence.
> **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
> list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
> of shape ``(batch_size, sequence_length, hidden_size)``:
> Hidden-states of the model at the output of each layer plus the initial embedding outputs.
> **attentions**: (`optional`, returned when ``config.output_attentions=True``)
> list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
> Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
> ```
>
> BTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` whereas it should be `(batch_size, num_heads, sequence_length, sequence_length)`.
>
> ```
> output = bertmodel(**inputs)
> logits = output[0]
> hidden_states = output[1]
> ```
I believe your comment is in reference to the standard models, but it's hard to tell without a link. Can you link to where in the documentation the pasted doc string is from?
I dont know if you saw my original comment but I was providing an example for how to get `hidden_states` from the `..ForSequenceClassification` models, not the standard ones. The `..ForSequenceClassification` models do not output `hidden_states` by default: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification<|||||>Sorry, I missed that part :) I am referring to the standard BERTMODEL. Doc link:
https://huggingface.co/transformers/model_doc/bert.html#bertmodel
> > @maxzzze According to the documentation, one can get the last hidden states directly without setting this flag to True. See below.
> > ```
> > Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
> > **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``
> > Sequence of hidden-states at the output of the last layer of the model.
> > **pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``
> > Last layer hidden-state of the first token of the sequence (classification token)
> > further processed by a Linear layer and a Tanh activation function. The Linear
> > layer weights are trained from the next sentence prediction (classification)
> > objective during Bert pretraining. This output is usually *not* a good summary
> > of the semantic content of the input, you're often better with averaging or pooling
> > the sequence of hidden-states for the whole input sequence.
> > **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
> > list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
> > of shape ``(batch_size, sequence_length, hidden_size)``:
> > Hidden-states of the model at the output of each layer plus the initial embedding outputs.
> > **attentions**: (`optional`, returned when ``config.output_attentions=True``)
> > list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
> > Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
> > ```
> >
> >
> > BTW, for me, the shape of hidden_states in the below code is `(batch_size, 768)` whereas it should be `(batch_size, num_heads, sequence_length, sequence_length)`.
> > ```
> > output = bertmodel(**inputs)
> > logits = output[0]
> > hidden_states = output[1]
> > ```
>
> I believe your comment is in reference to the standard models, but its hard to tell without a link. Can you link where to where in the documentation the pasted doc string is from?
>
> I dont know if you saw my original comment but I was providing an example for how to get `hidden_states` from the `..ForSequenceClassification` models, not the standard ones. The `..ForSequenceClassification` models do not output `hidden_states` by default: https://huggingface.co/transformers/model_doc/bert.html#bertforsequenceclassification
<|||||>@engrsfi @maxzzze @bkkaggle
Please, look [here](https://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/). I hope it can help :)<|||||>@TheEdoardo93 is this example taking the first element in each of the `hidden_states`?<|||||>@engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.
Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.
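If you do want a single vector built from all tokens instead (the averaging idea mentioned earlier in this thread), a rough sketch looks like this (assuming the `tokenizer`, `model` and `torch` import from the earlier snippets):
```
input_ids = torch.tensor(tokenizer.encode("My sentence")).unsqueeze(0)
last_hidden_states = model(input_ids)[0]          # (1, seq_len, hidden_size)
sentence_vector = last_hidden_states.mean(dim=1)  # (1, hidden_size)
```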
If you want to get the embeddings for classification, just do something like:
```
input_sentence = torch.tensor(tokenizer.encode("[CLS] My sentence")).unsqueeze(0)
out = model(input_sentence)
embeddings_of_last_layer = out[0]
cls_embeddings = embeddings_of_last_layer[0, 0]  # hidden state of the [CLS] token (first token of the first sequence)
```<|||||>> @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.
>
> Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.
>
> If you want to get the embeddings for classification, just do something like:
>
> ```
> input_sentence = torch.tensor(tokenizer.encode("[CLS] My sentence")).unsqueeze(0)
> out = model(input_sentence)
> embeddings_of_last_layer = out[0]
> cls_embeddings = embeddings_of_last_layer[0]
> ```
Do you have any reference as to "people usually only take the hidden states of the [CLS] token of the last layer"?<|||||>Here are a few related links: [1](https://github.com/google-research/bert/issues/196), [2](https://github.com/hanxiao/bert-as-service#q-what-are-the-available-pooling-strategies), [3](https://yashuseth.blog/2019/06/12/bert-explained-faqs-understand-bert-working/)
The [CLS] token isn't the only (or necessarily the best) way to finetune, but it is the easiest and is Bert's default<|||||>There is some clarification about the use of the last hidden states in the BERT Paper.
According to the paper, the last hidden state for [CLS] is mainly used for classification tasks and the last hidden states for all tokens are used for token level tasks such as sequence tagging or question answering.
From the paper:
> At the output, the token representations are fed into an output layer for token level tasks, such as sequence tagging or question answering, and the [CLS] representation is fed into an output layer for classification, such as entailment or sentiment analysis.
Reference:
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (https://arxiv.org/pdf/1810.04805.pdf)<|||||>What about ALBERT? The output of the last hidden state isn't the same of the embedding because in the doc they say that the embedding have a size of 128 for every model (https://arxiv.org/pdf/1909.11942.pdf page 6).
But I'm not sure if the 128-embedding referenced in the table is something internally used to represent words or the final word embedding.<|||||>> Found it, thanks @bkkaggle . Just for others who are looking for the same information.
>
> Using Pytorch:
>
> ```
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = BertModel.from_pretrained('bert-base-uncased')
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
>
> Using Tensorflow:
>
> ```
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
if batch size is N, how to convert?<|||||>> What about ALBERT? The output of the last hidden state isn't the same of the embedding because in the doc they say that the embedding have a size of 128 for every model (https://arxiv.org/pdf/1909.11942.pdf page 6).
> But I'm not sure if the 128-embedding referenced in the table is something internally used to represent words or the final word embedding.
128 is used internally by Albert. The output of the model (last hidden state) is your actual word embeddings. In order to understand this better, you should read the following blog from Google.
https://ai.googleblog.com/2019/12/albert-lite-bert-for-self-supervised.html
Quote:
"The key to optimizing performance, captured in the design of ALBERT, is to allocate the model’s capacity more efficiently. Input-level embeddings (words, sub-tokens, etc.) need to learn context-independent representations, a representation for the word “bank”, for example. In contrast, hidden-layer embeddings need to refine that into context-dependent representations, e.g., a representation for “bank” in the context of financial transactions, and a different representation for “bank” in the context of river-flow management.
**This is achieved by factorization of the embedding parametrization — the embedding matrix is split between input-level embeddings with a relatively-low dimension (e.g., 128), while the hidden-layer embeddings use higher dimensionalities (768 as in the BERT case, or more).** With this step alone, ALBERT achieves an 80% reduction in the parameters of the projection block, at the expense of only a minor drop in performance — 80.3 SQuAD2.0 score, down from 80.4; or 67.9 on RACE, down from 68.2 — with all other conditions the same as for BERT."
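As a quick check (assuming a transformers version that already ships ALBERT; exact attribute names may vary slightly across versions), both the factorized input-embedding size and the hidden size are visible on the config, and the model output keeps the hidden size:
```python
import torch
from transformers import AlbertConfig, AlbertTokenizer, AlbertModel

config = AlbertConfig.from_pretrained('albert-base-v2')
print(config.embedding_size, config.hidden_size)  # factorized input embeddings (e.g. 128) vs hidden states (e.g. 768)

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')

input_ids = torch.tensor([tokenizer.encode("a bank of the river", add_special_tokens=True)])
last_hidden_state = model(input_ids)[0]
print(last_hidden_state.shape)  # (1, seq_len, hidden_size), not the 128-dim internal embedding
```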
<|||||>> > Found it, thanks @bkkaggle . Just for others who are looking for the same information.
> > Using Pytorch:
> > ```
> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> > model = BertModel.from_pretrained('bert-base-uncased')
> > input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> > outputs = model(input_ids)
> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> > ```
> >
> >
> > Using Tensorflow:
> > ```
> > import tensorflow as tf
> > from transformers import BertTokenizer, TFBertModel
> >
> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> > model = TFBertModel.from_pretrained('bert-base-uncased')
> > input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> > outputs = model(input_ids)
> > last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> > ```
>
> if batch size is N, how to convert?
If I understand you correctly, you are asking for how to get the last hidden states for all entries in a batch of size N. If that's the case, then here is the explanation.
Your model expects an input of the following shape:
`(batch_size, sequence_length)`
and returns last hidden states of the following shape:
`(batch_size, sequence_length, hidden_size)`
You can just iterate over the last hidden states to get the individual last hidden state for each of the N inputs in the batch.
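For instance, a minimal runnable sketch (the toy batch just repeats one sentence, so no padding is needed):
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

ids = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
batch = torch.tensor([ids, ids])              # toy batch of size N=2 (same sentence twice)
last_hidden_states = model(batch)[0]          # (batch_size, sequence_length, hidden_size)

for i in range(last_hidden_states.shape[0]):  # one entry per input in the batch
    per_token = last_hidden_states[i]         # (sequence_length, hidden_size)
    cls_vector = per_token[0]                 # hidden state of the [CLS] token
```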
Reference:
https://huggingface.co/transformers/model_doc/bert.html<|||||>@engrsfi : What if I want to use bert embedding vector of each token as an input to an LSTM network? Can I get the embedding of each token of the sentence from the last hidden layer of the bert model? In this case I think I can't just use the embedding for [CLS] token as I need word embedding of each token?
I used the code below to get BERT's word embeddings for all tokens of my sentences. I padded all my sentences to a maximum length of 80 and also used an attention mask to ignore the padded elements. In this case the shape of the last_hidden_states element is (batch_size, 80, 768). However, when I inspect my embeddings, I can see that the embedding vectors for the padded elements are not the same: I have a vector of size 768 for each token of the sentence (most of them padding tokens), but the vectors for the padded elements are not equal. Is that natural?
```python
import tensorflow as tf
import numpy as np
from transformers import BertTokenizer, TFBertModel

bert_model = TFBertModel.from_pretrained("bert-base-uncased")
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

tokenized = x_train['token'].apply((lambda x: bert_tokenizer.encode(x, add_special_tokens=True, max_length=80)))
padded = np.array([i + [0]*(80-len(i)) for i in tokenized.values])
attention_mask = np.where(padded != 0, 1, 0)

input_ids = tf.constant(padded)
attention_mask = tf.constant(attention_mask)
output = bert_model(input_ids, attention_mask=attention_mask)
last_hidden_states = output[0]
```
<|||||>> How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:
>
> sentence vector:
> `sentence_vector = bert_model("This is an apple").vector`
>
> word_vectors:
>
> ```
> words = bert_model("This is an apple")
> word_vectors = [w.vector for w in words]
> ```
>
> I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).
Hi, could I ask how you would use Spacy to do this? Is there a link? Thanks a lot. <|||||>> > How can I extract embeddings for a sentence or a set of words directly from pre-trained models (Standard BERT)? For example, I am using Spacy for this purpose at the moment where I can do it as follows:
> > sentence vector:
> > `sentence_vector = bert_model("This is an apple").vector`
> > word_vectors:
> > ```
> > words = bert_model("This is an apple")
> > word_vectors = [w.vector for w in words]
> > ```
> >
> >
> > I am wondering if this is possible directly with huggingface pre-trained models (especially BERT).
>
> Hi, could I ask how you would use Spacy to do this? Is there a link? Thanks a lot.
Here is the link:
https://spacy.io/usage/vectors-similarity<|||||>> @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.
>
> Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.
>
> If you want to get the embeddings for classification, just do something like:
>
> ```
> input_sentence = torch.tensor(tokenizer.encode("[CLS] My sentence")).unsqueeze(0)
> out = model(input_sentence)
> embeddings_of_last_layer = out[0]
> cls_embeddings = embeddings_of_last_layer[0]
> ```
Thank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be:
```cls_embeddings = embeddings_of_last_layer[0][0]```? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. ```embeddings_of_last_layer[0]``` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.<|||||>@sahand91
pooled_output, sequence_output = bert_model(input_)
pooled_output.shape = (1, 768), one vector on 768 entries (represent the whole sentence)
sequence_output.shape = (batch_size, max_len, dim), (1, 256, 768) bs = 1, n_tokens = 256
sequence output gives the vector for each token of the sentence.
I have used the sequence output for classification task like sentiment analysis. As the paper mentions that the pooled output is not a good representation of the whole sentence so we use the sequence output and feed it further in a CNN or LSTM.
So I don't see any problem in using the sequence output for classification tasks as we get to see the actual vector representation of the word say "bank" in both contexts "commercial" and "location" (bank of a river) <|||||>> > @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.
> > Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.
> > If you want to get the embeddings for classification, just do something like:
> > ```
> > input_sentence = torch.tensor(tokenizer.encode("[CLS] My sentence")).unsqueeze(0)
> > out = model(input_sentence)
> > embeddings_of_last_layer = out[0]
> > cls_embeddings = embeddings_of_last_layer[0]
> > ```
>
> Thank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be:
>
> `cls_embeddings = embeddings_of_last_layer[0][0]`? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. `embeddings_of_last_layer[0]` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.
Yes i think the same. @sumitsidana
embeddings_of_last_layer[0][0].shape
Out[179]: torch.Size([144]) # where 144 in my case is the hidden_size
Anyone confirming that embeddings_of_last_layer[0][0] is the embedding related to CLS token for each sequence?<|||||>> > > @engrsfi You can process the hidden states of BERT (all layers or only the last layer) in whatever way you want.
> > > Most people usually only take the hidden states of the [CLS] token of the last layer - using the hidden states for all tokens or from multiple layers doesn't usually help you that much.
> > > If you want to get the embeddings for classification, just do something like:
> > > ```
> > > input_sentence = torch.tensor(tokenizer.encode("[CLS] My sentence")).unsqueeze(0)
> > > out = model(input_sentence)
> > > embeddings_of_last_layer = out[0]
> > > cls_embeddings = embeddings_of_last_layer[0]
> > > ```
> >
> >
> > Thank you for sharing the code. It really helped in understanding tokenization in BERT. I ran this and had a minor problem. Shouldn't it be:
> > `cls_embeddings = embeddings_of_last_layer[0][0]`? This is because embeddings_of_last_layer is of the dimension: 1*#tokens*#hidden-units. Then, since [CLS] is the first token (and usually have 101 as id), we want embedding corresponding to just [CLS]. `embeddings_of_last_layer[0]` is of shape #tokens*#hidden-units and contains embeddings of all the tokens.
>
> Yes i think the same. @sumitsidana
> embeddings_of_last_layer[0][0].shape
> Out[179]: torch.Size([144]) # where 144 in my case is the hidden_size
>
> Anyone confirming that embeddings_of_last_layer[0][0] is the embedding related to CLS token for each sequence?
Yes it is. but it is only for first batch. you will have to loop through all the batches and get the first element (CLS) for each sentence.<|||||>Yes gotcha. Thanks<|||||>> This is a bit different for `...ForSequenceClassification` models. I've found that the item at `outputs[0]` are the logits and the only way to get the `hidden_states` is to set `config.output_hidden_states=True` when initializing the model. Only then was I able to get the `hidden_states` which are located at `outputs[1]`.
>
> Example:
>
> ```python
> inputs = {
> "input_ids": batch[0],
> "attention_mask": batch[1]
> }
>
> output = bertmodel(**inputs)
> logits = output[0]
> hidden_states = output[1]
> ```
logits = output[0] means the word embedding. So, does hidden_states = output[1] mean the sentence-level embedding?<|||||>> Found it, thanks @bkkaggle . Just for others who are looking for the same information.
>
> Using Pytorch:
>
> ```
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = BertModel.from_pretrained('bert-base-uncased')
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
>
> Using Tensorflow:
>
> ```
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
outputs[0] is sentence embedding for "Hello, my dog is cute" right?
then what is outputs[1]?<|||||>> Found it, thanks @bkkaggle . Just for others who are looking for the same information.
>
> Using Pytorch:
>
> ```
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = BertModel.from_pretrained('bert-base-uncased')
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
>
> Using Tensorflow:
>
> ```
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
If I want to encode a list of strings,
input_ids = torch.tensor(tokenizer.encode(["Hello, my dog is cute", "how are you"])).unsqueeze(0)
It does not really give me a 2*768 array. The only way would be
input_ids = [torch.tensor([tokenizer.encode(text) for text in ["Hello, my dog is cute", "how are you"]]).unsqueeze(0)]
Anything to make it faster?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> Found it, thanks @bkkaggle . Just for others who are looking for the same information.
>
> Using Pytorch:
>
> ```
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = BertModel.from_pretrained('bert-base-uncased')
> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
>
> Using Tensorflow:
>
> ```
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
This is great. I am interested in how to get word vectors for out-of-vocabulary (OOV) tokens. Any references would help, thanks.
For example, if I use this sentence: "This framework generates embeddings for each input sentence"
I am getting 11 tokens (+ start and end) when I have only 8 words; "embeddings" is out of vocabulary in my model.<|||||>@engrsfi
> import tensorflow as tf
> from transformers import BertTokenizer, TFBertModel
>
> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
> model = TFBertModel.from_pretrained('bert-base-uncased')
> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] # Batch size 1
> outputs = model(input_ids)
> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
> ```
It stops with errors on model = TFBertModel.from_pretrained('bert-base-uncased'):
model = TFBertModel.from_pretrained('bert-base-uncased')
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 484, in from_pretrained
model(model.dummy_inputs, training=False) # build the network with dummy inputs
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_bert.py", line 739, in call
outputs = self.bert(inputs, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 712, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_bert.py", line 606, in call
embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 709, in __call__
self._maybe_build(inputs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_bert.py", line 146, in build
initializer=get_initializer(self.initializer_range),
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py", line 389, in add_weight
aggregation=aggregation)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py", line 713, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 154, in make_variable
shape=variable_shape if variable_shape else None)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py", line 260, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
shape=shape)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 2502, in default_variable_creator
shape=shape)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 464, in __init__
shape=shape)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 608, in _init_from_args
initial_value() if init_from_fn else initial_value,
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 134, in <lambda>
init_val = lambda: initializer(shape, dtype=dtype)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 341, in __call__
dtype = _assert_float_dtype(dtype)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/init_ops_v2.py", line 769, in _assert_float_dtype
raise ValueError("Expected floating point type, got %s." % dtype)
ValueError: Expected floating point type, got <dtype: 'int32'>.
|
transformers | 1,949 | closed | Can i train my own text corpus | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how? | 11-26-2019 11:05:01 | 11-26-2019 11:05:01 | You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.
Have i answered to you?
> ## Questions & Help
> I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?<|||||>> You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.
>
> Have i answered to you?
>
> > ## Questions & Help
> > I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?
Hey Thanks! I'll go through the source code you referred, also just wanted to confirm the same goes with gpt-2 model right?<|||||>Yeah, this particular script works with OpenAI GPT-2 too.
In general, most of the code stays the same even when you change the model.
> > You can initialize the weights of your model as the ones of e.g. BERT, and after that you can fine-tune your model with _your own data_ (**transfer learning**). Please see [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) for fine-tuning for a particular task.
> > Have i answered to you?
> > > ## Questions & Help
> > > I get the idea of using `from_pretrained` but Can i train my own text corpus and then get the weights? If So how?
>
> Hey Thanks! I'll go through the source code you referred, also just wanted to confirm the same goes with gpt-2 model right?<|||||>Thanks for the quick response, i'll close this one then. |
transformers | 1,948 | closed | Should I use `attention_mask`? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
#### My Target
1. Transfer sentences to `ids`
2. Pad `ids` when shorter than `max_length`
3. Encode `ids` to vector(`last_hidden_states`)
4. Put this vector to my own downstream model.
#### My code
```py
original_text = "Hello world!"
ids = tokenizer.encode(original_text)
padding_mask = [1] * len(ids) # Padding mask
while len(ids) < max_length:
ids.append(0) # Padding
padding_mask.append(0) # Mask using 0
# === Use `attention_mask` ===
outputs = model(ids, attention_mask=padding_mask)
last_hidden_states = outputs[0] # Get vector such as '<CLS>'
```
#### But...
In official demo code, **it does not use padding mask**:
```py
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
# === Not use `attention_mask` ===
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
So, why? Should I use `attention_mask`?
Thanks! | 11-26-2019 10:33:51 | 11-26-2019 10:33:51 | Hi,
The attention mask is useful when there are padding indices on which you do not want to perform attention. You should use it when you're using padding, which should only happen when having a batch size superior to one with different sized sequences in those batches.<|||||>> Hi,
>
> The attention mask is useful when there are padding indices on which you do not want to perform attention. You should use it when you're using padding, which should only happen when having a batch size superior to one with different sized sequences in those batches.
Thanks for your reply! @LysandreJik
So, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t.
But why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?<|||||>> So, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t.
Exactly. You can use them, but you don't need to. You probably shouldn't because of the performance cost.
> But why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?
It is ok because there is only one sentence.<|||||>> > So, you mean I should use it when I use batch data and there are different sized sequences in batch, otherwise I shouldn’t.
>
> Exactly. You can use them, but you don't need to. You probably shouldn't because of the performance cost.
>
> > But why the official demo is ok? I guess it will implement PAD again no matter what I have already implemented PAD manually or not? Right?
>
> It is ok because there is only one sentence.
Thank you! @rlouf
1. I think those tokens that have been padded should not be paid attention. So I don't know why I don't need to.
2. The most models have a fixed length(max_length), the sentences should pad to it before feed. I mean why the official demo doesn't pad.<|||||>> 1. I think those tokens that have been padded should not be paid attention. So I don't know why I don't need to.
Sorry, now I re-read my response I realize it was not very clear. I meant that if they are the same size you can still pad, but you'd hit the performance. If they are not, then you should absolutely pad and pass the appropriate attention mask.
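For concreteness, a minimal sketch of that second case (two different-length sentences, manual padding plus the matching attention mask; model and tokenizer names are just the usual BERT defaults):
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')

sentences = ["Hello world!", "A noticeably longer sentence than the first one."]
encoded = [tokenizer.encode(s, add_special_tokens=True) for s in sentences]
max_len = max(len(ids) for ids in encoded)

input_ids, attention_mask = [], []
for ids in encoded:
    pad_len = max_len - len(ids)
    input_ids.append(ids + [tokenizer.pad_token_id] * pad_len)   # pad the shorter sequence
    attention_mask.append([1] * len(ids) + [0] * pad_len)        # 0 = ignore the padding

input_ids = torch.tensor(input_ids)
attention_mask = torch.tensor(attention_mask)
last_hidden_states = model(input_ids, attention_mask=attention_mask)[0]  # (2, max_len, 768)
```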
> 2. The most models have a fixed length(max_length), the sentences should pad to it before feed. I mean why the official demo doesn't pad.
Yes but sentences don't all have the same length:
```
// No need to pad this
[[1, 2, 3],
[5, 6, 7]]
```
BUT
```
// Here you should pad
[[1, 2, 3, pad_token_id],
[5, 6, 7, 8]]
```<|||||>Thank you very much! I got it. @rlouf |
transformers | 1,947 | closed | Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask' | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
fine tuning bert use wikitext2 | 11-26-2019 09:30:56 | 11-26-2019 09:30:56 | in probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)<|||||>Please, describe your environment, post the source code for reproducibility and the error.
> in probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)<|||||>Please upgrade your Pytorch version to 1.2.0+.<|||||>You're probably passing in a boolean tensor (true or false) instead of a byte tensor (0 or 1) for your attention mask.
Try changing
```
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0)
```
to
```
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.uint8), value=0.0)
``` |
transformers | 1,946 | closed | Fixed typo | Changed `emove` to `remove` | 11-26-2019 04:06:23 | 11-26-2019 04:06:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=h1) Report
> Merging [#1946](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1946 +/- ##
======================================
Coverage 84% 84%
======================================
Files 97 97
Lines 14340 14340
======================================
Hits 12047 12047
Misses 2293 2293
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1946/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=footer). Last update [5d3b8da...e1d116d](https://codecov.io/gh/huggingface/transformers/pull/1946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 1,945 | closed | When using the Bert model | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
pregenerate_training_data.py and finetune_on_pregenerated.py Is it in the project.If so, where? | 11-26-2019 03:54:55 | 11-26-2019 03:54:55 | These two files are **not** in the Transformers library **now**.
> ## Questions & Help
> pregenerate_training_data.py and finetune_on_pregenerated.py Is it in the project.If so, where?<|||||>As @TheEdoardo93 says, these files were community maintained and have been removed a few months ago. <|||||>so what is the procedure to fine tune BERT on my data? |
transformers | 1,944 | closed | tokenization progress made more sensible via tqdm | Because tokenization takes a relatively long time, having progress being visualized via tqdm would be nice. | 11-25-2019 22:09:05 | 11-25-2019 22:09:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=h1) Report
> Merging [#1944](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc?src=pr&el=desc) will **increase** coverage by `<.01%`.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1944 +/- ##
==========================================
+ Coverage 84% 84.01% +<.01%
==========================================
Files 97 97
Lines 14340 14341 +1
==========================================
+ Hits 12047 12048 +1
Misses 2293 2293
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1944/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.16% <100%> (+0.01%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=footer). Last update [5d3b8da...36b211d](https://codecov.io/gh/huggingface/transformers/pull/1944?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok for me.
Ok for you @LysandreJik?<|||||>I feel this might output too much text when tokenizing a lot of small sequences (which is the case for practically every example). It would be useful when tokenizing large datasets though. Maybe test if the length is superior to, say, 10000 before? What do you think?<|||||>You might be right, I have never thought about that. But it's a stubborn fact that when the time comes to tokenize larger datasets. How could we test about length? It is not about the `tokenized_text` 's length.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,943 | closed | [email protected]: Permission denied (publickey) when fetching | When you wanted to sync forked repository with base, as described [here](https://github.com/huggingface/transformers/blame/aa92a184d2b92faadec975139ad55e2ae749362c/CONTRIBUTING.md#L140)
You get:
```
➜ transformers (master) ✗ git fetch upstream
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
```
[This](https://stackoverflow.com/a/34081606) solution introduced on stackoverflow fixes the problem. [This](https://stackoverflow.com/questions/13509293/git-fatal-could-not-read-from-remote-repository#comment85002398_34081606) one also says:
> If the repo owner has not set up ssh keys then you will likely have this issue. The fix as indicated is to use https instead, or have the repo owner set up ssh
Could you please fix this (by setting up ssh?), in order to make contributing easy?
Thanks! | 11-25-2019 21:49:17 | 11-25-2019 21:49:17 | Yeah you can clone using https, it's usually easier (github actually recommends it for simple workflows)
cc @rlouf <|||||>Nice! Then, we might update [this line](https://github.com/huggingface/transformers/blame/5d3b8daad2cc6287d30f03f8a96d0a1f7bc8d0dc/CONTRIBUTING.md#L109) since Github encourages https instead of ssh.<|||||>Thanks for pointing this out, I just made the change. |
transformers | 1,942 | closed | Wrong paraphrase in the TF2/PyTorch README example. | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): TFBertForSequenceClassification
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [x] the official example scripts: https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: Sequence Classification
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the attached script.
2. Observe
```
$ /Users/igor/projects/ml-venv/bin/python /Users/igor/projects/transformers-experiments/paraphrasing_issue.py
2019-11-25 08:58:53.985213: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fed57a2be00 executing computations on platform Host. Devices:
2019-11-25 08:58:53.985243: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
INFO:absl:Overwrite dataset info from restored data version.
INFO:absl:Reusing dataset glue (/Users/igor/tensorflow_datasets/glue/mrpc/0.0.2)
INFO:absl:Constructing tf.data.Dataset for split None, from /Users/igor/tensorflow_datasets/glue/mrpc/0.0.2
Train for 115 steps, validate for 7 steps
Epoch 1/2
115/115 [==============================] - 4587s 40s/step - loss: 0.5850 - accuracy: 0.7045 - val_loss: 0.4695 - val_accuracy: 0.8137
Epoch 2/2
115/115 [==============================] - 4927s 43s/step - loss: 0.3713 - accuracy: 0.8435 - val_loss: 0.3825 - val_accuracy: 0.8358
**sentence_1 is a paraphrase of sentence_0
sentence_2 is a paraphrase of sentence_0**
```
3. Wonder why.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
pytorch_model = BertForSequenceClassification.from_pretrained('./save/', from_tf=True)
# Quickly test a few predictions - MRPC is a paraphrasing task, let's see if our model learned the task
sentence_0 = "This research was consistent with his findings."
sentence_1 = "His findings were compatible with this research."
sentence_2 = "His findings were not compatible with this research."
inputs_1 = tokenizer.encode_plus(sentence_0, sentence_1, add_special_tokens=True, return_tensors='pt')
inputs_2 = tokenizer.encode_plus(sentence_0, sentence_2, add_special_tokens=True, return_tensors='pt')
pred_1 = pytorch_model(inputs_1['input_ids'], token_type_ids=inputs_1['token_type_ids'])[0].argmax().item()
pred_2 = pytorch_model(inputs_2['input_ids'], token_type_ids=inputs_2['token_type_ids'])[0].argmax().item()
print("sentence_1 is", "a paraphrase" if pred_1 else "not a paraphrase", "of sentence_0")
print("sentence_2 is", "a paraphrase" if pred_2 else "not a paraphrase", "of sentence_0")
```
## Expected behavior
```
sentence_1 is a paraphrase of sentence_0
sentence_2 is not a paraphrase of sentence_0
```
## Environment
* OS: MacOS
* Python version: 3.7.5
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): last commit afaa33585109550f9ecaaee4e47f187aaaefedd0 as of Sat Nov 23 11:34:45 2019 -0500.
* Using GPU ? nope
* Distributed of parallel setup ? single machine
* Any other relevant information: TF version is 2.0.0
| 11-25-2019 19:46:09 | 11-25-2019 19:46:09 | Hi, I'm investigating. For now, I confirm the issue that you observe. I've tested on both CPU and GPU and it gives the same result. I've tested with Pytorch and TF models too, same result. Now, let's track the cause!<|||||>Hi again,
Ok I've retrained a Pytorch model using `run_glue.py` on MRPC to check.
The final metrics are:
```
***** Eval results *****
acc = 0.8382608695652174
acc_and_f1 = 0.8608840882272851
f1 = 0.8835073068893529
```
So it's not crazy high but not near random either.
Then I've retested:
```
Is "This research was consistent with his findings" same as:
"His findings were compatible with this research." ?
TRUE -> 😄
"His findings were not compatible with this research." ?
TRUE -> 😢
```
I've taken a more complex sentence from training set
```
Is 'Amrozi accused his brother, whom he called "the witness", of deliberately distorting his evidence.' same as:
"Referring to him as only "the witness", Amrozi accused his brother of deliberately distorting his evidence." ?
TRUE -> 😄
"Referring to him as only "the witness", Amrozi accused his brother of not deliberately distorting his evidence." ?
TRUE -> 😢
"platypus to him as only "the platypus", platypus accused his platypus of deliberately platypus his evidence." ?
TRUE -> 😭
"platypus to him as only "the platypus", platypus accused his platypus of deliberately platypus his platypus." ?
FALSE -> 🌝
```
Here we see that it's not robust to `not` as in the primary case. Then it's also not robust to replacing any word with `platypus` until I replace 6 words (which is quite disappointing on the performance of the model, it's true).
I've taken sentences from test set:
```
Is "A tropical storm rapidly developed in the Gulf of Mexico Sunday and was expected to hit somewhere along the Texas or Louisiana coasts by Monday night." same as:
"A tropical storm rapidly developed in the Gulf of Mexico on Sunday and could have hurricane-force winds when it hits land somewhere along the Louisiana coast Monday night." ?
TRUE -> 😢
----------------------------------------------------------------------------------------
Is "The broader Standard & Poor's 500 Index <.SPX> was 0.46 points lower, or 0.05 percent, at 997.02." same as:
"The technology-laced Nasdaq Composite Index .IXIC was up 7.42 points, or 0.45 percent, at 1,653.44." ?
FALSE -> 😄
--------------------------------------------------------------------------------------------
Is "NASA plans to follow-up the rovers' missions with additional orbiters and landers before launching a long-awaited sample-return flight." same as:
"NASA plans to explore the Red Planet with ever more sophisticated robotic orbiters and landers."
FALSE -> 😄
----------------------------------------------------------------------------------------
Is "We are piloting it there to see whether we roll it out to other products." same as:
"Macromedia is piloting this product activation system in Contribute to test whether to roll it out to other products."
TRUE -> 😄
```
Here we see that sometimes it works, sometimes not. I might be wrong but I haven't seen anything in the code that could explain this issue (83% is the final accuracy on dev set... ok but it remains 1 error on 5 cases). A priori, I'd say that basic BERT trained like that on this tiny dataset is simply not that robust for that task in a generalized case and would need more data or at least more data augmentation.
Do you share my conclusion or see something different?
<|||||>Thanks for the investigation. Was the performance ever different at the time when that example was put into the README?<|||||>TBH, personally I wasn't there, so I don't know...
If anyone at huggingface can answer this question?
I've been looking at the MRPC leaderboard https://gluebenchmark.com/leaderboard/ and BERT is around my training above so it looks like a normal score.<|||||>MRPC is a very small dataset (the smallest in the GLUE benchmark, which is why we use it as an example). It should not be expected to generalize well or be usable in real-life settings.
The performance you got @mandubian is a normal score indeed.<|||||>Sounds like we don't think there's an actionable issue here. |
transformers | 1,941 | closed | NER - sciBERT weights not initialized. | ## ❓ Questions & Help
<!-- Custom weights not initialized when training NER on dataset. -->
Hi all,
first of all thanks for this awesome interface.
Coming to the issue:
I am trying out NER on the Anatem dataset, using Google Colab's GPU.
I imported SciBERT (and BioBERT) models with the solutions provided in issue [457](https://github.com/huggingface/transformers/issues/457).
For clarity, batch_size is 8 because when set to 16 the GPU goes into seg fault.
The script is the following. I am in the `transformers/examples` folder:
```
!python3 run_ner.py --data_dir ../datasets/ \
--model_type bert \
--labels ../datasets/labels.txt \
--model_name_or_path ../scibert_model \
--output_dir ../results_scibert \
--max_seq_length 512 \
--num_train_epochs 3 \
--per_gpu_train_batch_size 8 \
--save_steps 750 \
--seed 1 \
--do_train \
--do_eval \
--do_predict
```
And the warning is:
```
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Model name '../scibert_model' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). Assuming '../scibert_model' is a path or url to a directory containing tokenizer files.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/added_tokens.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/special_tokens_map.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - Didn't find file ../scibert_model/tokenizer_config.json. We won't load it.
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file ../scibert_model/vocab.txt
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.tokenization_utils - loading file None
11/25/2019 16:47:14 - INFO - transformers.modeling_utils - loading weights file ../scibert_model/pytorch_model.bin
11/25/2019 16:47:20 - INFO - transformers.modeling_utils - Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
11/25/2019 16:47:20 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
```
Could you please explain the meaning of this? I have read other issues about it, but I didn't really grasp the meaning and the solution.
Thank you very much! | 11-25-2019 17:14:35 | 11-25-2019 17:14:35 | Hi, this means that the script did not find your tokenization file. You're pointing to the folder `../scibert_model` but either that folder does not exist, either it does not contain a `vocab.txt` file which is required by the `BertTokenizer`.<|||||>Hi LysandreJik,
Many thanks for your reply.
`vocab.txt` does indeed exist, as well as the folder. I can see it is loaded in the section I proposed above.
It also states that the weights from the provided model in the folder are loaded, but then it specifies that weights for `BertForTokenClassification` are not.
Are there weights for separate objects?
Sorry for the stupid questions, just trying to understand whether I'm doing things the proper way.<|||||>Could you try to load the tokenizer/model in a standalone script, or in a python console? Here are a the required commands to load a tokenizer and a model from a saved checkpoint:
```py
from transformers import BertTokenizer, BertModelForTokenClassification
tokenizer = BertTokenizer.from_pretrained(folder)
model = BertModelForTokenClassification.from_pretrained(folder)
```
Thanks!<|||||>Sure, I loaded it in Ipython.
I just changed `BertModelForTokenClassification` to `BertForTokenClassification` and I'm in folder `/transformers` instead of `transformers/examples`:
```
In [2]: from transformers import BertTokenizer, BertForTokenClassification
In [3]: tokenizer = BertTokenizer.from_pretrained('./scibert_model/')
In [4]: model = BertForTokenClassification.from_pretrained('./scibert_model/')
In [5]: print(tokenizer)
<transformers.tokenization_bert.BertTokenizer object at 0x7fe34c08add8>
In [6]: print(model)
BertForTokenClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(31090, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(2): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(3): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(4): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(5): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(6): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(7): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(8): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(9): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(10): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=2, bias=True)
)
```<|||||>Ah right, my mistake I thought there was an error in your first message but there actually is none, it's just a warning! I misunderstood.
The first warning concerning the tokenizer means that no special tokens were added when the vocabulary was saved.
The second warning means that some weights were not loaded by the model: `['classifier.weight', 'classifier.bias']` and that some weights were not present in the checkpoint: ` ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']`.
This means that if you want to use this model for some task you will need to fine-tune it as the classifier layers were initialized randomly. This is the case for most of our models as each task requires specific training.<|||||>Many thanks for your quick response and your availability, @LysandreJik!
By fine-tuning it, do you mean I should run it in training and evaluation mode without prediction?<|||||>Fine-tuning a model means that in **training mode**:
- you initialize the weights of the entire model with the ones from the SciBERT checkpoint
- after that, you train the model with _your own data_ in order to obtain better performance on your specific task
Once you have finished training the model, you can use it for **prediction** and see whether it has enough accuracy for your task.
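To make that concrete, here is a minimal sketch of such a fine-tuning loop (the checkpoint path, the toy texts, the label count and the hyper-parameters are placeholders of mine, not taken from your setup):
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification, AdamW

# Placeholder path: point it at the directory holding your SciBERT weights/vocab/config
checkpoint = './scibert_scivocab_uncased/'
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = BertForSequenceClassification.from_pretrained(checkpoint, num_labels=2)  # classifier head starts random

texts = ["a first toy sentence", "a second toy sentence"]   # replace with your own data
labels = [0, 1]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for text, label in zip(texts, labels):
        input_ids = torch.tensor([tokenizer.encode(text)])
        loss = model(input_ids, labels=torch.tensor([label]))[0]  # loss is the first element of the output tuple
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
model.save_pretrained('./my-finetuned-model/')
```
After training, loading `./my-finetuned-model/` with `from_pretrained` should no longer warn about a randomly initialized classifier.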
> Many thanks for your quick response and your availability, @LysandreJik!
>
> By fine-tuning it, do you mean I should run it in training and evaluation mode without prediction?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,940 | closed | Add TF2 NER example | Hi,
Here is my small contribution. I have implemented the TF2 version of the NER example already existing in the repo. I tried to keep the implementation as close as possible to the PyTorch version.
Let me know for any needed changes :) | 11-25-2019 17:03:00 | 11-25-2019 17:03:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=h1) Report
> Merging [#1940](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/88b317739fe56888528c857fc8e90967148a0051?src=pr&el=desc) will **decrease** coverage by `1.04%`.
> The diff coverage is `51.37%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1940 +/- ##
==========================================
- Coverage 84.26% 83.21% -1.05%
==========================================
Files 104 106 +2
Lines 15431 15679 +248
==========================================
+ Hits 13003 13048 +45
- Misses 2428 2631 +203
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: |
| [transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <ø> (ø)` | :arrow_up: |
| [transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <ø> (ø)` | :arrow_up: |
| [transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2F1dG8ucHk=) | `32.5% <ø> (-18.75%)` | :arrow_down: |
| [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.11% <0%> (-0.71%)` | :arrow_down: |
| [transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <0%> (+2.18%)` | :arrow_up: |
| [transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `85.57% <100%> (ø)` | :arrow_up: |
| [transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `90.86% <100%> (+1.42%)` | :arrow_up: |
| [transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `88.31% <100%> (-3.8%)` | :arrow_down: |
| [transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX29wZW5haS5weQ==) | `95.92% <100%> (ø)` | :arrow_up: |
| ... and [38 more](https://codecov.io/gh/huggingface/transformers/pull/1940/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=footer). Last update [88b3177...938da1c](https://codecov.io/gh/huggingface/transformers/pull/1940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I finished implementing the TF2 NER example; it should be really similar to the PyTorch one, as I tried to reproduce most of the parameters.<|||||>Thanks for your contribution @jplu
Do you mind doing a clean rebase (and force-push to this branch), or creating a new PR, with just your changes?<|||||>I will remake the PR, that will be cleaner :) |
transformers | 1,939 | closed | Abruptly model training was stopped | I'm fine-tuning a custom NER dataset using the BERT cased model, with the examples/run_ner.py script. **In the 2nd epoch, training stopped abruptly without displaying any error.**
Epoch: 67%|██████▋ | 2/3 [2:48:41<1:24:21, 5061.32s/it]
Iteration: 98%|█████████▊| 3734/3816 [1:22:30<01:48, 1.32s/it]
Iteration: 98%|█████████▊| 3735/3816 [1:22:31<01:47, 1.33s/it]
**The training was stopped at 98%.**
Training details are given here:
no. of training sentences: 122095, batch size=32, num_epochs=3, save_steps=750, GPU server: Tesla K40m
Could anybody help me out how to solve this issue and please let me know if you need any further information.
Thanks in advance
| 11-25-2019 16:58:49 | 11-25-2019 16:58:49 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,938 | closed | Load output file from fine-tuned bert language model | Hi,
I have fine-tuned the BERT cased language model using **run_lm_finetuning.py**. 'output' is the output directory and '**bert-base-cased.txt**' is another file created by the model.
1. Does the .txt file mentioned contain the output of the fine-tuned model?
2. If so, how should I open the file? I am getting a 'UTF-8' encoding issue with it.
Thank you.
| 11-25-2019 16:09:59 | 11-25-2019 16:09:59 | Hmm this script should not output any `.txt` files except `eval_results.txt`. What is inside this output directory except for this file?<|||||>I didn't provide any evaluation file.
However logging info is as follows:
- Creating features from dataset file at
- Saving features into cached file **bert-base-cased_cached_lm_32.txt** |
transformers | 1,937 | closed | access to the vocabulary | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Is there any way we can get access to the vocabulary in GPT2? Like a list: [subtoken1, subtoken2, ...subtoken 10000...]
Thank you in advance! | 11-25-2019 15:53:33 | 11-25-2019 15:53:33 | You can obtain the **50.257 different tokens** with the following code:
```
import transformers
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
vocab = list(tokenizer.encoder.keys())
assert(len(vocab) == tokenizer.vocab_size) # it returns True!
```
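A small follow-up sketch of how you can use it (note that GPT-2's byte-level BPE marks a leading space with `Ġ`, so many entries look like `Ġthe`):
```python
print(vocab[:5])                                   # peek at the first few subtokens
print(tokenizer.convert_tokens_to_ids(['Ġthe']))   # token -> id
print(tokenizer.convert_ids_to_tokens([50256]))    # id -> token, 50256 is '<|endoftext|>'
```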
Close the issue if you've resolved your problem! ;)
> ## Questions & Help
> Is there any way we can get access to the vocabulary in GPT2? Like a list: [subtoken1, subtoken2, ...subtoken 10000...]
>
> Thank you in advance!<|||||>thank you! |
transformers | 1,936 | closed | how to output specific layer of TFBertForSequenceClassification, or add layer? | how to output the last layer of TFBertForSequenceClassification?
I want to output the layer before classifier (Dense)
```
Model: "tf_bert_for_sequence_classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 102267648
_________________________________________________________________
dropout_37 (Dropout) multiple 0
_________________________________________________________________
classifier (Dense) multiple 3845
=================================================================
Total params: 102,271,493
Trainable params: 102,271,493
Non-trainable params: 0
```
I tried tf.keras function
```
dense1_layer_model = Model(inputs=model.input, outputs=model.get_layer('bert').output)
```
It didnt worked. | 11-25-2019 12:32:44 | 11-25-2019 12:32:44 | Please, copy and paste the source code in order to reproduce your problem.
> how to output the last layer of TFBertForSequenceClassification?
>
> I want to output the layer before classifier (Dense)
>
> ```
> Model: "tf_bert_for_sequence_classification"
> _________________________________________________________________
> Layer (type) Output Shape Param #
> =================================================================
> bert (TFBertMainLayer) multiple 102267648
> _________________________________________________________________
> dropout_37 (Dropout) multiple 0
> _________________________________________________________________
> classifier (Dense) multiple 3845
> =================================================================
> Total params: 102,271,493
> Trainable params: 102,271,493
> Non-trainable params: 0
> ```
>
> I tried tf.keras function
>
> ```
> dense1_layer_model = Model(inputs=model.input, outputs=model.get_layer('bert').output)
> ```
>
> It didnt worked.<|||||>> Please, copy and paste the source code in order to reproduce your problem.
this is my original code
```
model = TFBertForSequenceClassification.from_pretrained('bert-base-chinese', num_labels=5)
model.summary()
```
```
Model: "tf_bert_for_sequence_classification"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
bert (TFBertMainLayer) multiple 102267648
_________________________________________________________________
dropout_37 (Dropout) multiple 0
_________________________________________________________________
classifier (Dense) multiple 3845
=================================================================
Total params: 102,271,493
Trainable params: 102,271,493
Non-trainable params: 0
```
I know if I use TFBertModel, I could get the (N, 512, 768) output without fine tuning.
```
model = TFBertModel.from_pretrained('bert-base-chinese')
model.summary()
```
But I need the (N, 512, 768) output after fine tuning.<|||||>I tried this too
```
model = Sequential()
model.add( TFBertModel.from_pretrained('bert-base-chinese') )
model.add( Dropout(0.5))
model.add( Dense(5,activation="softmax") )
model.summary()
```
```
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
In order to create a `Sequential` model with the TensorFlow.Keras framework, you have to specify the input shape through the `input_shape` parameter on the input layer; otherwise TensorFlow.Keras doesn't know the input shape of the model you're creating.<|||||>> In order to create a `Sequential` model with the TensorFlow.Keras framework, you have to specify the input shape through the `input_shape` parameter on the input layer; otherwise TensorFlow.Keras doesn't know the input shape of the model you're creating.
add layer
```
input_layer = Input(shape = (512,), dtype='int64')
bert = TFBertModel.from_pretrained('bert-base-chinese')(input_layer)
bert = bert[0] # i think there is a bug here
flat = Flatten()(bert)
classifier = Dense(units=5)(flat)
model = Model(inputs=input_layer, outputs=classifier)
model.summary()
```
```
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) [(None, 512)] 0
_________________________________________________________________
tf_bert_model_3 (TFBertModel ((None, 512, 768), (None, 102267648
_________________________________________________________________
flatten_2 (Flatten) (None, 393216) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 1966085
=================================================================
Total params: 104,233,733
Trainable params: 104,233,733
Non-trainable params: 0
```
thanks it worked!!!
fit
```
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model_fit = model.fit(train_input_ids, train_label,
batch_size=4, epochs=4,
validation_data=(validation_input_ids, validation_label)
)
```
extract layer
```
flatten_layer_model = Model(inputs=model.input, outputs=model.get_layer('flatten_2').output)
predictions = flatten_layer_model.predict(validation_input_ids)
print(type(predictions))
print(predictions.shape)
```
```
<class 'numpy.ndarray'>
(8359, 393216)
```<|||||>Hi @roccqqck, I am also doing something similar. Most of my queries are cleared by your comment. I have just one more doubt. The [documentation](https://huggingface.co/transformers/model_doc/bert.html#tfbertforsequenceclassification) states that the input of model should look like this `[input_ids, attention_mask]`. So, are you providing attention mask as input?
Have you uploaded the full code mentioned above, with the data, to your GitHub? If yes, can you please share the link?<|||||>@sainimohit23
I didn’t provide attention mask. |
transformers | 1,935 | closed | attention_mask added, not multiplied ... is this correct? | https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L233
This line adds (+ operator) the attention mask. I wonder whether this is correct, as I would have very much expected the mask to be multiplied. | 11-25-2019 08:40:38 | 11-25-2019 08:40:38 | Ping. :) To me this still looks like the code actually fails to apply the attention mask and also the parts of the sequence intended to be masked are accessible. The line is now https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L237<|||||>Hi, yes this is correct. Inside the `BertModel` forward method, the `attention_mask` is set to `0` for the tokens which should be attended (no modification) and `-10000` for the tokens which must be ignored, resulting in nullification of their attention scores.
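A tiny numerical sketch of why adding works here (toy scores, not taken from the model):
```python
import torch

scores = torch.tensor([2.0, 1.0, 3.0])        # raw attention scores for 3 positions
mask   = torch.tensor([0.0, 0.0, -10000.0])   # 0 = attend, -10000 = ignore (e.g. padding)
print(torch.softmax(scores + mask, dim=-1))   # tensor([0.7311, 0.2689, 0.0000])
```
Multiplying the scores by a 0/1 mask would only set the raw score to 0, which still receives a non-negligible softmax weight; adding a large negative number is what drives the attention weight itself to ~0.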
You can read the relevant source code [here](https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L683).<|||||>I see. Thank you very much for the informative answer!<|||||>Thank you for the explanation. I was thinking the same thing when reading the code.
In that case, shouldn't the `attention_mask` input for BertEncoder, BertLayer, ... be renamed to `extended_attention_mask` or `scaled_attention_mask`? Because those inner modules expect the scaled (and reshaped?) mask, not the user-supplied attention_mask.
Just a suggestion.
|
transformers | 1,934 | closed | Download model too slow, is there any way | in run_lm_finetuning.py
transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin not found in cache or force_download set to True, downloading to .... | 11-25-2019 08:13:18 | 11-25-2019 08:13:18 | If your model download is too slow and fails, you can manually download it from our S3 using your browser, wget or cURL as an alternative method.
You can then point to a directory that has both the model weights (xxx-pytorch_model.bin) and the configuration file (xxx-config.json) instead of the checkpoint name as the argument for `run_lm_finetuning.py`.<|||||>The models on s3 are downloaded by **botocore**. And can be accelerated using a proxy. Detailed information can be found on [](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html ).
Because It only supports **http** proxy now, other form of proxies like socks5 need to be converted to a http form.<|||||>OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json' to download pretrained model configuration file. <|||||>Can you open this [ https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json](url) in your browser ?<|||||>It can be opened in a browser<|||||>in run_lm_finetuning.py,
probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0).
Why does dtype equal torch.bool?
I have a difficulty here:
Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask'<|||||>Typically, when you say _masked_*, you want to use boolean values (0 for absence and 1 for presence). In this particular case (rows 144-151), you are sampling some tokens in each sequence for **masked** language modeling. For this reason, the _probability_matrix_ variable is being set to boolean values. In fact, the first argument of the _masked_fill()_ method is a boolean Torch tensor (i.e. the boolean vector). You can read more info in the PyTorch docs [here](https://pytorch.org/docs/stable/tensors.html).
As for your issue, please post the code for reproducibility and the versions of TensorFlow, PyTorch and Transformers you are using.
> in run_lm_finetuning.py,
> probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0).
> Why does dtyped equal torch.bool?
> I have a difficulty here:
> Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask'<|||||>@bigzhouj this is probably due to a Pytorch version error. I believe `bool` was introduced in pytorch v1.2.0. What is your Pytorch version? |
transformers | 1,933 | closed | Can I use HF XLNet to make a Model that Predicts Backwards? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I would like to create a model that lets me generate multi sentence text sequences but backwards, given the tail end of some input text.
Could I do that using the HF XLNet framework?
I am new to this stuff, so if this is possible would you be able to please give me some general pointers on how to go about doing this?
Grateful for any advice!
Cheers
Fred
| 11-25-2019 01:21:07 | 11-25-2019 01:21:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,932 | closed | Using GPT-2 XL | Hi, Im trying to use the pretrained gpt-xl
but I get the following error:
OSError: Model name 'gpt2-xl' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
Im following an example from the documentation.
7 # Load pre-trained model tokenizer (vocabulary)
---> 8 tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
9 # Load pre-trained model (weights)
10 model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
Any Idea why?
Thank you. :) | 11-24-2019 23:36:35 | 11-24-2019 23:36:35 | You don't have to use the PyPi version of this library, but **from the source code** with `pip install git+https://github.com/huggingface/transformers.git`. This is because, at the moment, GPT2 XL version is available only in a dedicated branch called *gpt2-xl* and not in the PyPi version.
My environment is the following:
- __Python__: 3.6.9
- __O.S.__: Linux-4.15.0-70-generic-x86_64-with-debian-buster-sid
- __Transformers__: 2.1.1 (installed from source)
- __Torch__: 1.3.1
After that, you're able to use OpenAI GPT-2 XL version as always, e.g.
```
import transformers
from transformers import GPT2Tokenizer
from transformers import GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
...
```
Please, close this issue!
> Hi, Im trying to use the pretrained gpt-xl
> but I get the following error:
> OSError: Model name 'gpt2-xl' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
>
> Im following an example from the documentation.
>
> ```
> 7 # Load pre-trained model tokenizer (vocabulary)
> ```
>
> ---> 8 tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
> 9 # Load pre-trained model (weights)
> 10 model = GPT2LMHeadModel.from_pretrained('gpt2-xl')
> Any Idea why?
>
> Thank you. :) |
transformers | 1,931 | closed | Using model output by transformers (v2.0) in older versions (0.4.0 or 1.0.0) | ## ❓ Questions & Help
I have a bert model finetuned on in-domain data on the latest version of the package (`2.0`). I would now like to use this in some code that is written with the older version of the package (say `0.4.0` or `1.0.0`). Would this be possible?
I tried pointing the code which imports version `0.4.0` to the model output and it gave an error saying `bert_config.json` was not found, but there was a `config.json` in the model folder. I renamed the `config.json` file, ran the code again, and it seems to work.
Am I on the right track? Is this all I have to do to get it run?
| 11-24-2019 06:09:02 | 11-24-2019 06:09:02 | Models' architectures should have stayed relatively the same between versions. If you did not get warnings telling you that some layers had not been loaded, then you should be good to go!
You could try and compare inferences between two environments which have different versions of the library installed to make sure that they're the same.<|||||>Well, I don't get any warnings, so that is good.
Could someone comment on `bert_config.json` vs `config.json`. Has there been any change in the naming scheme with the newer versions?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,930 | closed | BERT bertviz | 11-24-2019 02:05:47 | 11-24-2019 02:05:47 | ||
transformers | 1,929 | closed | configuration of the optimizer | ## 📚 Migration
<!-- Important information -->
Model I am using (Bert, XLNet....):
Bert
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
Sequence labeling task
Details of the issue:
So, what is the issue:
I have a code that works with ```pytorch-pretrained-bert==0.4.0```
with the following setup for the optimizer:
```
FULL_FINETUNING = True
if FULL_FINETUNING:
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
```
With this configuration I have an f1-score near to 68% from the beginning.
But with transformers, and migrating to something like (taken from documentation):
```
num_training_steps = 1000
num_warmup_steps = 100
warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1
#optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)
```
I stay in an f1-score near to 14%.
How can I simulate the former functionality to get back to to the better f1-score?
The changes between the two versions are
```
> num_training_steps = 1000
> num_warmup_steps = 100
> warmup_proportion = float(num_warmup_steps) / float(num_training_steps) # 0.1
> #optimizer = AdamW(model.parameters(), lr=lr, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
> scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps)
>
181c199
< epochs = 50
---
> epochs = 10
195c213
< attention_mask=b_input_mask, labels=b_labels)
---
> attention_mask=b_input_mask, labels=b_labels)[0]
204a223,224
> #optimizer.step()
> torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) # Gradient clipping is not in AdamW anymore (so you can use amp without issue)
205a226,227
> scheduler.step()
>
220c242
< attention_mask=b_input_mask, labels=b_labels)
---
> attention_mask=b_input_mask, labels=b_labels)[0]
222c244
< attention_mask=b_input_mask)
---
> attention_mask=b_input_mask)[0]
276d297
```
Also in the documentation https://huggingface.co/transformers/migration.html
suggests the following order
```
scheduler.step()
optimizer.step()
```
but that raises a warning from latest version of pytorch which wants the opposite order.
<!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. -->
## Environment
* OS:
* Python version: 3.6
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): latest
* Using GPU ? yes
* Distributed of parallel setup ? No
* Any other relevant information:
## Checklist
- [x] I have read the migration guide in the readme.
- [ ] I checked if a related official extension example runs on my machine.
<!-- Add any other context about the problem here. -->
| 11-23-2019 23:38:21 | 11-23-2019 23:38:21 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,928 | closed | Split on punc should receive never_split list | When tokenizing when a never_split token that contains any punctuation, such as [ or ] they currently get split when they shouldn't be. | 11-23-2019 16:27:51 | 11-23-2019 16:27:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=h1) Report
> Merging [#1928](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/176cd1ce1b337134425b426207fbe155099c18b4?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1928 +/- ##
=======================================
Coverage 84.04% 84.04%
=======================================
Files 97 97
Lines 14333 14333
=======================================
Hits 12046 12046
Misses 2287 2287
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1928/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `95.92% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=footer). Last update [176cd1c...35b06fa](https://codecov.io/gh/huggingface/transformers/pull/1928?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi, thanks for opening a PR! Could you provide an example of where this is an issue?<|||||>Hi @LysandreJik, thanks for the quick answer. The issue arises if you have `never_split` list that contains strings with punctuation, for example, square brackets. That means that you cannot easily append or use tokens in the vocabulary that have square brackets around them.
Two ways I can think of doing that is trying to reuse the [unusedN] tokens that come with BertTokenizer or adding new ones to the vocabulary. Something like the following:
BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, never_split=['[unused1]']).tokenize('hi how are you [unused1]')
> ['hi', 'how', 'are', 'you', '[', 'unused', '##1', ']']
It's also telling that the method receives a never_split list that is never used, so it seems like it was originally meant to be used in that way.<|||||>Hi the `never_split` option is deprecated now (and kept for backward compatibility purposes only).
To avoid splitting a token, you should add it to the vocabulary using `tokenizer.add_tokens(['[unused1]'])`. |
transformers | 1,927 | closed | Mask probability in run_lm_finetuning.py | Hi:
I don't understand why 0.5 is used when replacing masked input tokens with a random word. I think the probability should be 0.1? Or, the positions replaced with [MASK] shall already be stripped out before using 0.5. I think it is a small bug here? Thanks!
# 80% of the time, we replace masked input tokens with tokenizer.mask_token ([MASK])
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
inputs[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
# 10% of the time, we replace masked input tokens with random word
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)
inputs[indices_random] = random_words[indices_random]
# The rest of the time (10% of the time) we keep the masked input tokens unchanged
return inputs, labels
Best Regards,
Qi
| 11-23-2019 13:36:53 | 11-23-2019 13:36:53 | The lines:
```
indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
```
and
```
indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
```
make it that `indices_random` has a 50% chance of being true when `indices_replaced`, which has a 80% chance of being active, is not active, which is: 100% - 80% = 20%. 50% of 20% is 10%, so the chance that indices_random is true \*is\* 10%<|||||>Thanks for the reply. Yes. You are right! I missed the ~indices_replaced when reading the code. Thanks! |
transformers | 1,926 | closed | How to process ARC dataset with HuggingFace GPT2 | Hello,
I am interested in processing ARC dataset with HuggingFace GPT2.
The ARC dataset (http://nlpprogress.com/english/question_answering.html) is a question answering, which contains 7,787 genuine grade-school level, multiple-choice science questions. The dataset also comes with the full corpus of texts extracted from various articles that explains various scientific concepts that can be used to solve these 7,787 multiple choice questions (i.e. this full-corpus is not in the multiple choice format; it's just a series of excerpts from various articles)
I am assuming I'd have to use the GPT2DoubleHeadsModel to process this ARC dataset, since it is a set of multiple-choice questions. However, I also need to somehow train my GPT2DoubleHeadsModel based on the contents of the full corpus of texts that contains excerpts from various scientific articles, since GPT2DoubleHeadsModel wouldn't have acquired any scientific knowledge prior to processing this dataset.
But the thing is, the corpus of series of scientific articles on which I am interested in training my GPT2DoubleHeadsModel with is not written in a multiple-choice format -- is it possible to train only those parts of the GPT2DoubleHeadsModel that are responsible for language modelling with the series of scientific articles, and then fine-tune the entire component of the GPT2DoubleHeadsModel with the training data from the multiple choice questions?
If it is possible, how can I do it?
Thank you, | 11-23-2019 12:36:30 | 11-23-2019 12:36:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,925 | closed | Need a Restore training mechenisim in run_lm_finetuning.py | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
## Motivation
When training run_lm_finetuning.py for a long time, a restore training feature should be added.
Otherwise, states of sheduler and optimizer are changed when restart.
For example, when it breaks at step checkpoint-30000, it will restart at step 0 with initial learning rate and other configs. This is really troublesome.
Thanks, please.
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| 11-23-2019 08:39:41 | 11-23-2019 08:39:41 | If you want to resume training with the same learning rate, you can save the scheduler and optimizer and reload them when resuming training.
For example, you could save the current training state with:
```python
# Save the model and tokenizer
model.save_pretrained('./checkpoints/')
tokenizer.save_pretrained('./checkpoints/')
# Save the optimizer and scheduler
torch.save(optimizer.state_dict(), './checkpoints/optimizer.pt')
torch.save(scheduler.state_dict(), './checkpoints/scheduler.pt')
```
And resume training with:
```python
# Initialize model and tokenizer from checkpoints dir
model = BertModel.from_pretrained('./checkpoints/')
tokenizer = BertTokenizer.from_pretrained('./checkpoints/')
# Load optimizer and scheduler state
optimizer.load_state_dict(torch.load('./checkpoints/optimizer.pt'))
scheduler.load_state_dict(torch.load('./checkpoints/scheduler.pt'))
```
If you want more information, take a look at #839 and Pytorch's model serialization [tutorial](https://pytorch.org/tutorials/beginner/saving_loading_models.html)
If you want to resume training at the exact epoch and batch where you left off, like this [person](https://github.com/huggingface/transformers/issues/839#issuecomment-515129371), you could save the epoch and batch number as well and `continue` all iterations until you reach the correct batch<|||||>@bkkaggle Thanks for your reply, it really helps a lot!
Thank you!<|||||>@bkkaggle
However, the reasons that I change to PyTorch (Transformers by huggingface) are easy to use and thousands more positive ones.
> Why not adding an universal functionality to smoothly support this feature, like TF checkpoint does?
I think that is a natural way to save checkpoint when training.
It sounds more troublesome to customize the checkpoint style by users themselves, considering the high-level encapsulation characteristic brought by the framework. |
transformers | 1,924 | closed | TypeError: convert_examples_to_features() got an unexpected keyword argument 'sequence_a_is_doc' | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert):
Language I am using the model on (English):
The problem arise when using:
```
!python run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased \
--do_train \
--do_eval \
--do_lower_case \
--train_file train-v1.1.json \
--predict_file dev-v1.1.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Fine-tuning on SQuAD
## Environment
* OS:
* Python version: 3.6
* PyTorch version: 1.3.1'
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? Yes
| 11-23-2019 08:06:14 | 11-23-2019 08:06:14 | Please, try to install Transformers library **from source code**, and **not from PyPi**. The former one is the up-to-date version. In fact, if you see in [utils_squad.py](https://github.com/huggingface/transformers/blob/master/examples/utils_squad.py) at row 197, there is a parameter called `sequence_a_is_doc` in the definition of the `convert_examples_to_features()` method. Try it out and keep us updated on this problem!
> ## Bug
> Model I am using (Bert):
>
> Language I am using the model on (English):
>
> The problem arise when using:
>
> ```
> !python run_squad.py \
> --model_type bert \
> --model_name_or_path bert-large-uncased \
> --do_train \
> --do_eval \
> --do_lower_case \
> --train_file train-v1.1.json \
> --predict_file dev-v1.1.json \
> --per_gpu_train_batch_size 12 \
> --learning_rate 3e-5 \
> --num_train_epochs 2.0 \
> --max_seq_length 384 \
> --doc_stride 128 \
> --output_dir /tmp/debug_squad/
> ```
>
> The tasks I am working on is:
>
> * [ ] an official GLUE/SQUaD task: Fine-tuning on SQuAD
>
> ## Environment
> * OS:
> * Python version: 3.6
> * PyTorch version: 1.3.1'
> * PyTorch Transformers version (or branch): 2.1.1
> * Using GPU ? Yes<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,923 | closed | Step restarts from step 0 when reload from an existing checkpoint? | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi, everyone
I am totally new to Transformers, it's really a good solution. :-)
Question:
When I reload my program from a break point (existing checkpoint, say checkpoint-30000),
what I expect is that, from Tensorboard, I can see the program restarting from 30001.
However, it restarts from step 0, although parameters are up to date.
Does any config point to this problem?
Or any easy solutions?
Thanks.
| 11-23-2019 07:28:12 | 11-23-2019 07:28:12 | |
transformers | 1,922 | closed | update | 11-23-2019 01:12:27 | 11-23-2019 01:12:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=h1) Report
> Merging [#1922](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **decrease** coverage by `<.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1922 +/- ##
==========================================
- Coverage 84.04% 84.03% -0.01%
==========================================
Files 97 94 -3
Lines 14333 14032 -301
==========================================
- Hits 12046 11792 -254
+ Misses 2287 2240 -47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.95%)` | :arrow_down: |
| [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: |
| [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |
| [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <0%> (-0.37%)` | :arrow_down: |
| [transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |
| [transformers/tests/modeling\_tf\_common\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (-0.28%)` | :arrow_down: |
| [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.28%)` | :arrow_down: |
| [transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <0%> (-0.11%)` | :arrow_down: |
| [transformers/tests/modeling\_distilbert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.09% <0%> (-0.09%)` | :arrow_down: |
| [transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <0%> (-0.08%)` | :arrow_down: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/1922/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=footer). Last update [26db31e...4da7586](https://codecov.io/gh/huggingface/transformers/pull/1922?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 1,921 | closed | FileNotFoundError when running run_squad.py | ## ❓ Questions & Help
I tried fine-tuning BERT on squad on my local computer. The script I ran was
```
python3 ./examples/run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ../models/wwm_uncased_finetuned_squad/ \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3 \
```
But I get an error with regards to the `train-v1.1.json` not being found. The full output is
```
I1122 20:03:40.218862 4637015488 tokenization_utils.py:375] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt from cache at /Users/maxtian/.cache/torch/transformers/b3a6b2c6d7ea2ffa06d0e7577c1e88b94fad470ae0f060a4ffef3fe0bdf86730.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
I1122 20:03:40.596048 4637015488 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin from cache at /Users/maxtian/.cache/torch/transformers/66cc7a7501e3499efedc37e47b3a613e0d3d8d0a51c66224c69f0c669b52dcfb.ae11cc7f2a26b857b76b404a908c7abad793f88bf8ad95caecff154da87994b1
I1122 20:03:54.460903 4637015488 modeling_utils.py:453] Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
I1122 20:03:54.461247 4637015488 modeling_utils.py:456] Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
I1122 20:03:54.473404 4637015488 run_squad.py:504] Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='bert-large-uncased-whole-word-masking', model_type='bert', n_best_size=20, n_gpu=0, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='../models/wwm_uncased_finetuned_squad/', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='/dev-v1.1.json', save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', train_file='/train-v1.1.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)
I1122 20:03:54.474577 4637015488 run_squad.py:308] Creating features from dataset file at /train-v1.1.json
```
And I get the following error
```
Traceback (most recent call last):
File "./examples/run_squad.py", line 573, in <module>
main()
File "./examples/run_squad.py", line 518, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
File "./examples/run_squad.py", line 311, in load_and_cache_examples
version_2_with_negative=args.version_2_with_negative)
File "/Users/maxtian/Desktop/Python_Projects/transformers/examples/utils_squad.py", line 114, in read_squad_examples
with open(input_file, "r", encoding='utf-8') as reader:
FileNotFoundError: [Errno 2] No such file or directory: '/train-v1.1.json'
```
| 11-23-2019 01:09:09 | 11-23-2019 01:09:09 | You need to download that json on squad website and put to your local
directory.
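A rough sketch of grabbing them with plain Python (the URLs are the ones commonly listed on the SQuAD explorer page; double-check them if they have moved):
```python
import os
import urllib.request

squad_dir = "./SQUAD_DIR"
os.makedirs(squad_dir, exist_ok=True)
for name in ["train-v1.1.json", "dev-v1.1.json"]:
    urllib.request.urlretrieve(
        "https://rajpurkar.github.io/SQuAD-explorer/dataset/" + name,
        os.path.join(squad_dir, name))
```
Then point `--train_file` and `--predict_file` at those downloaded files.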
On Sat, Nov 23, 2019 at 09:09 Max Tian <[email protected]> wrote:
> ❓ Questions & Help
>
> I tried fine-tuning BERT on squad on my local computer. The script I ran
> was
>
> python3 ./examples/run_squad.py \
>
> --model_type bert \
>
> --model_name_or_path bert-large-uncased-whole-word-masking \
>
> --do_train \
>
> --do_eval \
>
> --do_lower_case \
>
> --train_file $SQUAD_DIR/train-v1.1.json \
>
> --predict_file $SQUAD_DIR/dev-v1.1.json \
>
> --learning_rate 3e-5 \
>
> --num_train_epochs 2 \
>
> --max_seq_length 384 \
>
> --doc_stride 128 \
>
> --output_dir ../models/wwm_uncased_finetuned_squad/ \
>
> --per_gpu_eval_batch_size=3 \
>
> --per_gpu_train_batch_size=3 \
>
>
> But I get an error with regards to the train-v1.1.json not being found.
> The full output is
>
> I1122 20:03:40.218862 4637015488 tokenization_utils.py:375] loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt from cache at /Users/maxtian/.cache/torch/transformers/b3a6b2c6d7ea2ffa06d0e7577c1e88b94fad470ae0f060a4ffef3fe0bdf86730.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
>
> I1122 20:03:40.596048 4637015488 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-pytorch_model.bin from cache at /Users/maxtian/.cache/torch/transformers/66cc7a7501e3499efedc37e47b3a613e0d3d8d0a51c66224c69f0c669b52dcfb.ae11cc7f2a26b857b76b404a908c7abad793f88bf8ad95caecff154da87994b1
>
> I1122 20:03:54.460903 4637015488 modeling_utils.py:453] Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias']
>
> I1122 20:03:54.461247 4637015488 modeling_utils.py:456] Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
>
> I1122 20:03:54.473404 4637015488 run_squad.py:504] Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cpu'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='bert-large-uncased-whole-word-masking', model_type='bert', n_best_size=20, n_gpu=0, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='../models/wwm_uncased_finetuned_squad/', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=3, per_gpu_train_batch_size=3, predict_file='/dev-v1.1.json', save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', train_file='/train-v1.1.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0)
>
> I1122 20:03:54.474577 4637015488 run_squad.py:308] Creating features from dataset file at /train-v1.1.json
>
>
>
>
> And I get the following error
>
> Traceback (most recent call last):
>
> File "./examples/run_squad.py", line 573, in <module>
>
> main()
>
> File "./examples/run_squad.py", line 518, in main
>
> train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False)
>
> File "./examples/run_squad.py", line 311, in load_and_cache_examples
>
> version_2_with_negative=args.version_2_with_negative)
>
> File "/Users/maxtian/Desktop/Python_Projects/transformers/examples/utils_squad.py", line 114, in read_squad_examples
>
> with open(input_file, "r", encoding='utf-8') as reader:
>
> FileNotFoundError: [Errno 2] No such file or directory: '/train-v1.1.json'
>
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/1921?email_source=notifications&email_token=AIEAE4HBKLYKDQWTFUTKO3TQVB7EZA5CNFSM4JQXY37KYY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4H3QZTPA>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AIEAE4ANIT2HJAZA5JSGH2LQVB7EZANCNFSM4JQXY37A>
> .
>
<|||||>Oh, my mistake. I thought the JSON files were already in the repo. |
transformers | 1,920 | closed | CTRLTokenizer not consistent with the fastBPE tokenizer used in Salesforce/CTRL | ## 🐛 Bug
cc @keskarnitish
<!-- Important information -->
I am using the transformers CTRL re-implementation to fine-tune the original pre-trained model released by Salesforce https://github.com/salesforce/ctrl.
When the input text consists of newline characters, the tokenization of the Transformers tokenizer differs from the one used by Salesforce CTRL.
## To Reproduce
Salesforce tokenization:
```
import fastBPE
import re
bpe = fastBPE.fastBPE('codes', 'vocab')
line = 'This is one sentence.\nAnd this is another sentence!\n'
tokenized_line = bpe.apply([line])[0]
tokenized_line = re.findall(r'\S+|\n', tokenized_line)
toks = list(filter(lambda x: x != u'@@', tokenized_line))
print(toks)
['This', 'is', 'one', 'sentenc@@', 'e.@@', '\n', 'And', 'this', 'is', 'another', 'sent@@', 'ence@@', '!@@', '\n']
```
Transformers tokenization:
```
from transformers import CTRLTokenizer
tokenizer = CTRLTokenizer.from_pretrained('ctrl', do_lower_case=False)
toks = tokenizer.tokenize(line)
print(toks)
['This', 'is', 'one', 'sentenc@@', 'e.@@', '\n@@', 'And', 'this', 'is', 'another', 'sent@@', 'ence!']
```
Also, I get this issue with double space in the input text:
```
line = 'And also a problem with more than one consecutive space'
tokenized_line = tokenizer.tokenize(line)
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-37-747954838180> in <module>
1 line = 'And also a problem with more than one consecutive space'
----> 2 tokenized_line = tokenizer.tokenize(line)
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in tokenize(self, text, **kwargs)
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
--> 649 tokenized_text = split_on_tokens(added_tokens, text)
650 return tokenized_text
651
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in split_on_tokens(tok_list, text)
644 return sum((self._tokenize(token, **kwargs) if token not \
645 in self.added_tokens_encoder and token not in self.all_special_tokens \
--> 646 else [token] for token in tokenized_text), [])
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_utils.py in <genexpr>(.0)
644 return sum((self._tokenize(token, **kwargs) if token not \
645 in self.added_tokens_encoder and token not in self.all_special_tokens \
--> 646 else [token] for token in tokenized_text), [])
647
648 added_tokens = list(self.added_tokens_encoder.keys()) + self.all_special_tokens
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_ctrl.py in _tokenize(self, text)
141
142 for token in text:
--> 143 split_tokens.extend([t for t in self.bpe(token).split(' ')])
144 return split_tokens
145
~/anaconda3/envs/hugging/lib/python3.7/site-packages/transformers/tokenization_ctrl.py in bpe(self, token)
94 return self.cache[token]
95 word = tuple(token)
---> 96 word = tuple(list(word[:-1]) + [word[-1]+'</w>'])
97 pairs = get_pairs(word)
98
## Expected behavior
I expect the tokenizers to output identical tokenizations so that fine-tuning is consistent with pre-training.
I expect the tokenizer to handle double spaces.
## Environment
* OS: Linux
* Python version: 3.7.4
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
| 11-22-2019 20:39:30 | 11-22-2019 20:39:30 | Indeed, thanks a lot for the bug report.
This is fixed on master (by using regex to split instead of white spaces). |
transformers | 1,919 | closed | Fix typo in documentation. toto -> to | 11-22-2019 19:57:08 | 11-22-2019 19:57:08 | thank you! |
|
transformers | 1,918 | closed | Minor bug fixes on run_ner.py | Adding a dictionary entry outside of initialization requires an '=' instead of ':' | 11-22-2019 18:35:49 | 11-22-2019 18:35:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=h1) Report
> Merging [#1918](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1918 +/- ##
=======================================
Coverage 84.04% 84.04%
=======================================
Files 97 97
Lines 14333 14333
=======================================
Hits 12046 12046
Misses 2287 2287
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=footer). Last update [26db31e...17949e4](https://codecov.io/gh/huggingface/transformers/pull/1918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>:+1: sorry, that was a pasting mistake in https://github.com/huggingface/transformers/pull/1792 🙈 |
transformers | 1,917 | closed | run_squad.py not running | ## ❓ Questions & Help
When I try to run the script to fine-tune BERT on squad using the code from the examples:
```
python ./examples/run_squad.py \
--model_type bert \
--model_name_or_path bert-large-uncased-whole-word-masking \
--do_train \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ../models/wwm_uncased_finetuned_squad/ \
--per_gpu_eval_batch_size=3 \
--per_gpu_train_batch_size=3 \
```
But my terminal just gets stuck on this stage
<img width="518" alt="image" src="https://user-images.githubusercontent.com/8890262/69446607-c39b3780-0d22-11ea-8bea-6135f05640da.png">
When I run the pytests, everything has passed
| 11-22-2019 17:23:03 | 11-22-2019 17:23:03 | |
transformers | 1,916 | closed | Truncating GPT2 past | ## ❓ Questions & Help
I am using code like this to generate text:
```python
while True:
output_token, past = model.forward(output_token, past=past)
output_token = output_token[:, -1, :]
output_token = torch.multinomial(F.softmax(output_token, dim=-1), num_samples=1)
out = torch.cat((out, output_token), dim=1)
```
The problem with this is that `past` keeps growing.
My solution is to check the size of past and truncate it like this:
```python
if past[0].shape[-2] > max_past:
past = [p[..., -max_past:, :] for p in past]
```
I don't think this is correct, can anyone enlighten me? | 11-22-2019 15:18:37 | 11-22-2019 15:18:37 | Hi! Your usage of `past` seems correct to me. After how many iterations do you feel the need to truncate?<|||||>I also would like to have an example that works correctly with GPT2. If past is not truncated, the model crashes after it reaches some threshold, which is, I suppose, `model.config.max_position_embeddings`. So, after it grows to this size, I truncate it as follows:
```
if past[0].shape[3] == model.config.max_position_embeddings - 1:
past = [p[:, :, :, :-1, ...] for p in past]
```
This is clearly broken, as the model's generation capabilities degrade dramatically after the truncation kicks in. Example (gpt2-medium):
>...And here's why... for every person of difference, there's a different template when it comes to talking. We've seen them together for so long that most people don't know who we are. The owner of the room has probably hidden that while we are awake, but since then, there's usually a perception gap to gape at the outside. He's an enemy of rule, we know this. He was excommunicated and let die and our purity is worth more in terms of glory than doing battle together. He probably ran away from us, we're not sure why but we do remember his location. What kind of story is this then? In whatever and wherever he was taken, the hapless thief of light known as Arcadia forced down with the blessing of the goddess Kali from the eternities, has been returned to each of us... in as faceless a form as it's possible to<**TRUNCATION BEGINS HERE**> us we were possibly possible to informally possible to manipulateable to us. NoTa possible to some strange not always been my memories allow, unhidden possible to beholdenf the parts are, only known. Upon the wanderer. This is able to all asked and thus possible to callable to us — being made for the willed possible for them that of receiving the righteous deed has ever been in our power was when we can look of whether it needs permitted to appear plausible to those we may befitting to us you and with you can take.
You can see that it starts producing largely incoherent sentences with bad grammar. It also loses basic abilities like matching brackets and quote marks. If I truncate the first element instead, as @LHolten does, the result is even worse:
>...And here's why... for every person of difference, there's a different template when it comes to talking. We've seen them together for so long that most people don't know who we are. The owner of the room has probably hidden that while we are awake, but since then, there's usually a perception gap to gape at the outside. He's an enemy of rule, we know this. He was excommunicated and let die and our purity is worth more in terms of glory than doing battle together. He probably ran away from us, we're not sure why but we do remember his location. What kind of story is this then? In whatever and wherever he was taken, the hapless thief of light known as Arcadia forced down with the blessing of the goddess Kali from the eternities, has been returned to each of us... in as faceless a form as it's possible to<**TRUNCATION BEGINS HERE**> to to to To Aud To Bed Since I January Nine Thou William July you very well, for this very purpose you can actually do without wearing/washing clothes any specific garments/tutexes, if they are Halloween- Halloween No-How-able Spells for Specific Results Splinterview February Treat him as Jess four of The H: We really dislike why's of Tactics The Neutral Generic Real Sheriff's the equivalent as Uthville He has been Henry's Gender Surprise Our Half<|endoftext|>
I'm afraid the problem is more complex than it initially seemed to me. Maybe losing a single element of history is too damaging somehow? Perhaps the only correct way to deal with "past overflow" is to truncate the context itself, say by removing the first paragraph from it, and then regenerate the past from it?
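A rough, untested sketch of that idea (self-contained, loosely following the loop from the top of the thread; the dropped tokens should be stashed elsewhere if the full text is needed):
```python
import torch
import torch.nn.functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

out = torch.tensor([tokenizer.encode("In a shocking finding,")])
input_ids, past = out, None
max_ctx = model.config.max_position_embeddings  # 1024 for GPT-2

with torch.no_grad():
    for _ in range(2000):
        logits, past = model(input_ids, past=past)[:2]
        next_token = torch.multinomial(F.softmax(logits[:, -1, :], dim=-1), num_samples=1)
        out = torch.cat((out, next_token), dim=1)
        input_ids = next_token
        # When the cache is about to overflow, drop the oldest half of the context
        # wholesale and rebuild `past` from scratch instead of slicing the cache.
        if past[0].shape[-2] >= max_ctx - 1:
            out = out[:, -(max_ctx // 2):]  # save the dropped tokens elsewhere if needed
            input_ids, past = out, None
```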
Generally, what is the best way of doing continuous generation without <|endoftext|> tokens?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,915 | closed | Any plan to include BART and T5? | # 🌟New model addition
## Model description
<!-- Important information -->
## Open Source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them)
## Additional context
<!-- Add any other context about the problem here. -->
| 11-22-2019 14:16:22 | 11-22-2019 14:16:22 | please search for issues beforehand (and fill the template if there's not already a relevant issue) |
transformers | 1,914 | closed | How to perform common sense reasoning task with GPT-2? | Hello,
I am new to NLP so I have lots of questions.
I am interested in carrying out common sense reasoning task with GPT-2, for example, with Winograd Schema Challenge dataset.
Q1. How should I tokenize the Winograd Schema Challenge dataset to process it with GPT-2 (with the double heads model, for instance)? Can someone please give me an example?
Q2. Can GPT2DoubleHeadsModel be used to conduct common sense reasoning task with Winograd Schema Challenge dataset?
Thank you, | 11-22-2019 12:51:14 | 11-22-2019 12:51:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,913 | closed | Some Questions about XLNet | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
1. XLNet is a model with relative position embeddings, so can the inputs be padded either on the right or on the left?
2. If I pad the inputs on the left, the input length is 100 and the max length is 128, and I don't want to consider the PAD tokens in an NER task (i.e. just use the hidden states of the real input tokens), should I take [-100:, :] over the output of the model?
For the length dimension, the last 100 positions are taken; for the hidden-state dimension, all values are taken.
3. Must the CLS token be at the end?
| 11-22-2019 09:58:01 | 11-22-2019 09:58:01 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,912 | closed | XLNet is getting slower when enabling mems | Per API doc, using mems help to reduce inference time. However, I noticed that more time is needed when increasing mem_len. Do I misunderstand the usage of mem_len parameter?
Here is the testing code
```
import time
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
text = """
A horrible, messy split second presents
itself to the heart-shaped version as Scott is moved. The upcoming movie benefits at
the mental cost of ages 14 to 12. Nothing substantial is happened for almost 48 days.
When that happens, we lose our heart. <eod> The quick brown fox jumps over the lazy dog. <mask>
"""
mems = None
input_ids = tokenizer.encode(text)
input_ids = torch.tensor(input_ids).unsqueeze(0)
epoch = 20
for men_len in [0, 16, 32, 64, 128]:
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased', mem_len=men_len)
start_dt = time.monotonic()
for i in range(epoch):
outputs = model(input_ids=input_ids, mems=mems)
if men_len > 0:
mems = outputs[1]
end_dt = time.monotonic()
print('Average Duration for men_len {}: {}'.format(men_len, round(end_dt-start_dt, 2)))
```
Output is
```
Average Duration for men_len 0: 2.49
Average Duration for men_len 16: 2.62
Average Duration for men_len 32: 2.67
Average Duration for men_len 64: 2.81
Average Duration for men_len 128: 3.28
``` | 11-22-2019 04:17:22 | 11-22-2019 04:17:22 | When using `past` or `mems` values, you should be careful not to give the model the input ids which have already been computed. We've recently [added a documentation section ](https://huggingface.co/transformers/quickstart.html#using-the-past)detailing the use of `past`, which is similar to the way `mems` should be used.
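In rough terms the pattern looks like this (a sketch adapted for `mems`, not the exact snippet from that section, and ignoring XLNet-specific decoding details such as the permutation mask; it assumes `mem_len` > 0 as in your code above):
```python
import torch
from transformers import XLNetTokenizer, XLNetLMHeadModel

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased', mem_len=128)

generated = tokenizer.encode("The quick brown fox")
input_ids = torch.tensor([generated])
mems = None
for _ in range(20):
    outputs = model(input_ids=input_ids, mems=mems)
    mems = outputs[1]                                  # cached states, reused at the next step
    next_token = torch.argmax(outputs[0][0, -1, :]).item()
    generated.append(next_token)
    input_ids = torch.tensor([[next_token]])           # only the new token is fed next time
```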
Please notice we're only feeding the model the tokens for which the attention values have not been computed yet, which is only the last token in the case of sequential decoding.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,911 | closed | Fix GPT2 docstring from #1906 | Fixes #1906
Changes the GPT2 Tokenizer's docstrings to correctly explain the reason for `add_prefix_space` parameter | 11-22-2019 02:56:42 | 11-22-2019 02:56:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=h1) Report
> Merging [#1911](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1911 +/- ##
=======================================
Coverage 84.04% 84.04%
=======================================
Files 97 97
Lines 14333 14333
=======================================
Hits 12046 12046
Misses 2287 2287
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1911/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9ncHQyLnB5) | `96.72% <ø> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=footer). Last update [26db31e...65c5080](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks! |
transformers | 1,910 | closed | Bart Tokenizer treat symbols in a word as a new word. | ## 🐛 Bug
<!-- Important information -->
The model I am using is Bart:
The problem arises when using:
* [ ] `tokenizer.encode`
* [ ] `tokenizer.decode`
The tasks I am working on is:
* [ ] Encode a string, then decode it back.
## To Reproduce
```
import torch
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
test_string = 'text with percentage%'
# encode Converts a string in a sequence of ids (integer), using the tokenizer and vocabulary.
input_ids = tokenizer.encode(test_string)
output = tokenizer.decode(input_ids)
print(output)
```
`>>> text with percentage %`
## Expected behavior
It should be `text with percentage%`, which treats the symbol in the word as one word.
## Environment
* OS: MacOS
* Python version: 3.7
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): Master
| 11-21-2019 17:55:05 | 11-21-2019 17:55:05 | Hi, this is due to the original BERT's tokenization. You can try it using the [google-research's implementation](https://github.com/google-research/bert):
```py
raw_text = 'text with percentage%'
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=True)
tokens = tokenizer.tokenize(raw_text)
print(tokens) # ['text', 'with', 'percentage', '%']
```
The goal is to be as close as possible to the original implementation, hence the similar behavior concerning special tokens. |
transformers | 1,909 | closed | Passing inputs to TFGPT2LMHeadModel results in error: 'TensorSliceDataset' object has no attribute 'shape' | ## 🐛 Bug
Model I am using (Bert, XLNet....): `TFGPT2LMHeadModel`
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [X ] my own modified scripts: (give details)
```
import tensorflow as tf
from transformers import *
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
raw_text = "Here comes the sun"
tokens = tokenizer.encode(raw_text, add_special_tokens=False)
inputs = tf.data.Dataset.from_tensor_slices( np.array(tokens) )
inputs = {'input_ids': inputs}
outputs = model(inputs)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X ] my own task or dataset: (give details)
Trying to work out a stripped-down version of `run_generation.py` using `TFGPT2LMHeadModel` only.
## To Reproduce
Steps to reproduce the behavior: just run the code above, you should get the following error:
```
Traceback (most recent call last):
File "./generate_text.py", line 47, in <module>
out = sample_sequence(tokens, num_samples=num_samples)
File "./generate_text.py", line 27, in sample_sequence
outputs = model(inputs)
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_gpt2.py", line 490, in call
transformer_outputs = self.transformer(inputs, **kwargs)
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_gpt2.py", line 257, in call
position_ids = tf.range(past_length, shape_list(input_ids)[-1] + past_length, dtype=tf.int32)[tf.newaxis, :]
File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 475, in shape_list
static = x.shape.as_list()
AttributeError: 'TensorSliceDataset' object has no attribute 'shape'
```
## Expected behavior
Still not sure!
## Environment
* OS: MacOsX 11.14.6 (Mojave)
* Python version: 3.7.5
* Tensorflow version: 2.0.0
* Tensorflow Transformers version (or branch): 2.1.1
* Using GPU ? No
* Distributed of parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| 11-21-2019 16:04:57 | 11-21-2019 16:04:57 | Hi! You can simply use `tf.constant` to build your input tensors, like this:
```py
raw_text = "Here comes the sun"
tokens = tokenizer.encode(raw_text, add_special_tokens=False)
inputs = {'input_ids': tf.constant(tokens)}
outputs = model(inputs)
```
You can use datasets when using building a custom loop or using keras.fit for example, as these will generally feed the tensors directly to the model, instead of feeding the `tf.data.Dataset` directly. Here's how I would go about starting a basic custom loop using a `tf.data.Dataset`:
```py
import tensorflow as tf
import numpy as np
from transformers import *
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2LMHeadModel.from_pretrained('gpt2')
raw_text = "Here comes the sun"
tokens = tokenizer.encode(raw_text, add_special_tokens=False)
inputs = tf.data.Dataset.from_tensor_slices( np.array([tokens]) )
for input_value in inputs:
outputs = model(input_value)
```
Please notice I converted to a numpy array by adding a dimension (`[tokens]`) otherwise you would end up with individual IDs held by the dataset rather than sequences of ids.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,908 | closed | Training transformer XL from scratch with my own dataset | ## ❓ Questions & Help
Is it possible to train a transformer XL model from scratch with my own dataset?
Just initialize the model with default params and compile & fit the model?
<!-- A clear and concise description of the question. -->
| 11-21-2019 15:59:39 | 11-21-2019 15:59:39 | I think yes,
Just load the model (class) and then start training. As it is mentioned [here](https://huggingface.co/transformers/model_doc/transformerxl.html#transformers.TransfoXLModel), model classes are just a PyTorch torch.nn.Module sub-class.
> This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.<|||||>Is it also possible to train tensorflow model?<|||||>If we think about [this](https://huggingface.co/transformers/model_doc/transformerxl.html#tftransfoxlmodel) expression, yes.
> This model is a tf.keras.Model tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
Also, check [this](https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability) out.<|||||>Thanks very much! |
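For concreteness, a from-scratch initialization of the PyTorch variant looks roughly like the sketch below (the tokenizer is just one option, the commented-out loss handling depends on the library version, and the TF 2.0 classes follow the same pattern):
```python
import torch
from transformers import TransfoXLConfig, TransfoXLLMHeadModel, TransfoXLTokenizer

config = TransfoXLConfig()             # default architecture, weights are randomly initialised
model = TransfoXLLMHeadModel(config)   # no pretrained checkpoint is loaded here
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')  # or build your own vocab

model.train()
optimizer = torch.optim.Adam(model.parameters(), lr=2.5e-4)
# ...then a standard PyTorch loop over your own batches, e.g.:
#     outputs = model(input_ids, labels=input_ids)
#     loss = outputs[0].mean()   # exact output format depends on the library version
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```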
transformers | 1,907 | closed | lm_fine-tuning on small dataset of 3 documents | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I am trying to use [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) on a sample dataset [here](https://github.com/dmlc/gluon-nlp/blob/master/scripts/bert/sample_text.txt).
I am running the script with following arguments but I get the exact identical pytorch_model.bin [440.5 MB] saved in the output_dir=op:
python run_lm_finetuning.py --train_data_file=sample_text.txt --output_dir=op --mlm --do_train --overwrite_output_dir --do_lower_case --save_steps=50
I was wondering if this dataset of 3 documents is too small to fine-tune on or if I can modify some arguments to get a domain-fine-tuned model.
Thanks! | 11-21-2019 14:07:44 | 11-21-2019 14:07:44 | How do you know you have exact identical `pytorch_model.bin` files? Do you just compare file sizes? IF so, it is not a qualified method just because weights usually are just float numbers and they (almost) always occupy same size on the disk. You can compare the hashes of files to make sure.<|||||>Yes, I just thought of comparing the files naively by comparing their sizes.
I see, yes, "hashes" sounds a much better way of comparing files, thanks. I'll post here if that works.
Also, do you have any beginner suggestions on generating the hashes quickly and efficiently?<|||||>I used md5sum pytorch_model.bin to generate the hashes of the files and both are different. Anyway, thanks, again! |
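The same check can be done in plain Python with only the standard library, reading the files in chunks so the ~440 MB checkpoints never have to fit in memory (paths below are just examples):
```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Point these at the original and the fine-tuned checkpoints.
print(file_md5('pytorch_model.bin'))
print(file_md5('op/pytorch_model.bin'))
```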
transformers | 1,906 | closed | Documentation error in GPT2Tokenizer | Documentation page:
https://huggingface.co/transformers/model_doc/gpt2.html#transformers.GPT2Tokenizer
Code place:
https://github.com/huggingface/transformers/blob/d7d36181fdefdabadc53adf51bed4a2680f5880a/transformers/tokenization_gpt2.py#L112-L113
This phrase:
> Otherwise, this tokenizer encode and decode method will not conserve the absence of a space at the beginning of a string: tokenizer.decode(tokenizer.encode(“Hello”)) = ” Hello”
is **NOT** correct.
Actually:
> tokenizer.decode(tokenizer.encode(“Hello”)) = ”Hello”
Try this example:
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print("'" + tokenizer.decode(tokenizer.encode("Hello")) + "'")
```
Output:
```
'Hello'
``` | 11-21-2019 14:00:46 | 11-21-2019 14:00:46 | According to the code (https://github.com/huggingface/transformers/blob/master/transformers/tokenization_gpt2.py#L183) the docstring is incomplete.
The `add_prefix_space` parameter should be passed to `tokenizer.tokenize()` as well as `tokenizer.encode()` or `tokenizer.decode()`.
As well, the docstring should actually say:
```
Otherwise, this tokenizer encode and decode method will not conserve a space at the beginning of a string: tokenizer.decode(tokenizer.encode(“ Hello”)) = ”Hello”
```
According to #1380, the Roberta/GPT2 tokenizer expects sequences to start with a space. Simply prepending a space to the input sequence doesn't give the same result, because `GPT2Tokenizer` only overrides the internally-used `_tokenize()` method, while `PreTrainedTokenizer`'s `tokenize()` method (which is called by the user) does its own preprocessing.
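If I read `tokenization_gpt2.py` correctly, the flag can also be passed straight through `encode`, in which case the leading space does survive the round trip (expected output shown as a comment):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

ids = tokenizer.encode("Hello", add_prefix_space=True)
print("'" + tokenizer.decode(ids) + "'")   # expected: "' Hello'" (leading space preserved)
```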
transformers | 1,905 | closed | run_summarization_finetuning.py | ## ❓ Questions & Help
Hi
Does your code run_summarization_finetuning.py implement the abstractive summarization approach described in "Text summarization with pretrained encoders." by Liu, Yang, and Mirella Lapata?
Thanks
<!-- A clear and concise description of the question. -->
| 11-21-2019 13:37:14 | 11-21-2019 13:37:14 | I think so. The script called `run_summarization_finetuning.py` has the goal of extracting text summary given as input a text document. This code shows how to fine-tune a text summarizer model (called **BertSum**) with two different datasets: CNN and Daily Mail.
> ## Questions & Help
> Hi
> Does your code run_summarization_finetuning.py implement the abstractive summarization approach described in "Text summarization with pretrained encoders." by Liu, Yang, and Mirella Lapata?
> Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,904 | closed | Typo in Documentation for GPT2LM Output "past" | I think the described output shape for "past" for GPT-2 is wrong.
In:
[https://github.com/huggingface/transformers/blob/0cdfcca24b5739e25b584c05b866baa19ea382ef/transformers/modeling_gpt2.py#L332](url)
it says the output shape of each key, value tensor in each self attention layer is
``(batch_size, num_heads, sequence_length, sequence_length)``
, but it should be
``(batch_size, num_heads, sequence_length, hidden_size / num_heads)``
or
``(batch_size, num_heads, sequence_length, embed_size_per_head)``
. Also, since the past tensor per layer always holds both the key and value tensors, it might even be clearer to write:
``(2, batch_size, num_heads, sequence_length, embed_size_per_head)`` | 11-21-2019 12:06:45 | 11-21-2019 12:06:45 | Yes, feel free to open a PR to fix this Patrick ;-) |
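For anyone who wants to double-check, a quick probe of the actual shapes (the size in the comment is what I would expect for the small `gpt2` checkpoint: 12 heads of 64 dimensions and 2 input tokens):
```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')

input_ids = torch.tensor([tokenizer.encode("Hello world")])
outputs = model(input_ids)
past = outputs[1]   # the "presents" cache returned by the model

# Expected: torch.Size([2, 1, 12, 2, 64])
#           (2, batch_size, num_heads, sequence_length, embed_size_per_head)
print(past[0].shape)
```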
transformers | 1,903 | closed | Valohai integration | Huggingface / Transformers Valohai integration
Changes to existing Transformers code:
- Prints Valohai-styled logs (JSON)
Additional info:
- valohai.yaml has most (but not all) parameters used by run_glue.py
- Valohai execution downloads all GLUE data by default (still pretty fast). The download script is placed in `utils/download_glue_data.py`.
- Valohai execution only saves model/checkpoint at the end by default (adjust with Valohai UI)
- Valohai execution logs every 25 steps (adjust with Valohai UI)
| 11-21-2019 10:02:44 | 11-21-2019 10:02:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=h1) Report
> Merging [#1903](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cdfcca24b5739e25b584c05b866baa19ea382ef?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1903 +/- ##
=======================================
Coverage 84.04% 84.04%
=======================================
Files 97 97
Lines 14333 14333
=======================================
Hits 12046 12046
Misses 2287 2287
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=footer). Last update [0cdfcca...66fc8d2](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> * Loss logging no longer accumulates (why was that?) but shows raw loss
With the revised commits, this is no longer true (but I'm not able to edit @JuhaKiili's original message, and he's AFK at Slush :) )
|
transformers | 1,902 | closed | Add CamemBERT models to modeling_auto | CamemBERT was in AutoConfig but not in AutoModel; this PR aims to correct that. Also, this is my first PR here, so please tell me if I missed anything :-)
| 11-21-2019 09:45:21 | 11-21-2019 09:45:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=h1) Report
> Merging [#1902](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e70cdf083ddb8bfe298d43e6d70d698a3a2f56d3?src=pr&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `14.28%`.
[](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1902 +/- ##
==========================================
- Coverage 84.08% 84.04% -0.04%
==========================================
Files 97 97
Lines 14316 14323 +7
==========================================
+ Hits 12037 12038 +1
- Misses 2279 2285 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1902/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <14.28%> (-1.52%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=footer). Last update [e70cdf0...75ef125](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks @Evpok ! |
transformers | 1,901 | closed | Methods get_input_embeddings() and set_input_embeddings() appear in documentation but not available. | ## ❓ Questions & Help
Hi, I'm trying to access the word embeddings layer of multilingual BERT, as I want to remove all tokens not belonging to Spanish and add some tokens which are part of this language, with the objective of adapting multilingual BERT to Spanish. The thing is, in your documentation you claim that there are two functions for this purpose: get_input_embeddings() and set_input_embeddings() (https://huggingface.co/transformers/model_doc/bert.html); in fact there is a link to the source code and they appear to be there. However, once I try to do this in version 2.1.1 (the one the documentation refers to), none of these methods is part of the BertModel class, which is astonishing to me. Please tell me what's wrong!! Did you remove these methods from current versions but keep a single documentation for all versions? In which versions can I find these methods? Is the source code shown in the documentation different from the actual source code of the library?
<!-- A clear and concise description of the question. -->
| 11-21-2019 09:39:57 | 11-21-2019 09:39:57 | Did you check if the methods are in `modeling_utils.py` in your local package?
See #1837 . As @julien-c said, the version of the lib you use may not be in sync with the scripts you run.
Try to install the lib from `master`:
`pip install git+https://github.com/huggingface/transformers`
<|||||>The doc you linked to is for the `master` version, if I'm not mistaken. Maybe we should make that clearer, cc @LysandreJik
Yes, @loveritsu929 is right – install from source if you want to use those methods right now. Thanks! |
transformers | 1,900 | closed | Can GPT2DoubleHeadsModel be used for regular next token prediction task without adjusting its head? | Hello,
According to the HuggingFace Transformer's website (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), **GPT2DoubleHeadsModel** is the GPT2 Model transformer with a language modelling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks.
Does this mean that we can use the **GPT2DoubleHeadsModel** for the regular language modelling task (i.e. next word prediction) without modifying its head? or would I need to adjust the head of **GPT2DoubleHeadsModel** if I want to do the next word prediction, since GPT2DoubleHeadsModel is for answering multiple-choice type questions only?
Thank you,
| 11-21-2019 09:25:20 | 11-21-2019 09:25:20 | You don't need to do anything in order to make it predict the next word. That's what it is. But you may consider finetuning it with your own dataset.<|||||>Hello,
Thank you for your reply.
so the **GPT2DoubleHeadsModel** (NOT GPT2LMHeadModel but the DoubleHeadsModel), without any adjustment on its head, can be used for any "non-multiple-choice-based" next token prediction, as well as for the multiple-choice questions?
Thank you,<|||||>Edit:
I'm not sure. I did misread the GPT2DoubleHeadsModel.<|||||>If GPT2DoubleHeadsModel can process both multiple-choice questions as well as non-multiple-choice next token predictions without any adjustment, why did HuggingFace make 2 different GPT2 models -- GPT2DoubleHeadsModel and GPT2LMHeadModel?
Can GPT2DoubleHeadsModel process both multiple-choice questions as well as non-multiple-choice next token predictions without any further adjustment(s) [e.g. adjustment on its head, etc,]?<|||||>So? |
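For what it's worth, the docstring quoted above ("a language modelling and a multiple-choice classification head on top") means the ordinary LM head is still there, so plain next-token prediction should work without touching the heads; a minimal sketch:
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2DoubleHeadsModel.from_pretrained('gpt2')   # the MC head is extra, the LM head stays
model.eval()

input_ids = torch.tensor([tokenizer.encode("The capital of France is")])
with torch.no_grad():
    lm_logits = model(input_ids)[0]        # first output: ordinary language-modelling logits
next_id = torch.argmax(lm_logits[0, -1, :]).item()
print(tokenizer.decode([next_id]))
```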
transformers | 1,899 | closed | Classify entities - run_ner script | Hello all,
I am just wondering what extra input "BertForTokenClassification" needs if I want to classify the given entities into (PER, LOC, ...). Note that the entities are given in advance. I used the run_ner script, but it both extracts the entities and classifies them (the extraction is not needed). I did not get how the script (or the input) can be modified for my task. Any idea? | 11-21-2019 03:19:44 | 11-21-2019 03:19:44 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,898 | closed | Is the usage of scheduler described in README correct? | https://github.com/huggingface/transformers/blame/master/README.md#L546
I think `scheduler.step` should not be called here, but in the epoch loop. | 11-21-2019 02:18:06 | 11-21-2019 02:18:06 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,897 | closed | Distilling GPT2 with gives OOM | ## ❓ Questions & Help
Distilling GPT2 gives OOM. What is the best way to fit both the teacher and the student on a single GPU and train?
Tried reducing the batch size, but that itself results in an error.
File "train.py", line 285, in main
distiller.train()
File "trabsformersexamples\distillation\distiller.py", line 340, in train
self.step(input_ids=token_ids, attention_mask=attn_mask, lm_labels=lm_labels)
File "trabsformersexamples\distillation\distiller.py", line 378, in step
s_logits, _, s_hidden_states = self.student(input_ids=input_ids, attention_mask=None) # (bs, seq_length, voc_size)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 549, in forward
inputs_embeds=inputs_embeds)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 439, in forward
inputs_embeds = self.wte(input_ids)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
| 11-21-2019 01:37:16 | 11-21-2019 01:37:16 | Do you need pretrain distilgpt2 from scratch? You can consider just finetuning it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
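As a side note on the traceback rather than the OOM itself: `nn.Embedding` only accepts int64 (Long) indices, so the `torch.cuda.IntTensor` error usually goes away once the token ids are cast with `.long()` before the forward pass. A generic illustration (not a change to the distillation script itself):
```python
import torch

ids = torch.randint(0, 100, (2, 8), dtype=torch.int32)   # int32 ids reproduce the error
emb = torch.nn.Embedding(100, 16)

# emb(ids) raises: "Expected tensor ... to have scalar type Long; but got ... IntTensor"
out = emb(ids.long())                                     # int64 (Long) indices are required
print(out.shape)                                          # torch.Size([2, 8, 16])
```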
|
transformers | 1,896 | closed | Tokenizing/Loading Data for GPT-2 (1 example per line) | I want to finetune GPT-2 on a dataset that has one example per line, and the examples all have different length. What is the best way to alter the TextDataset class to allow for this? I understand GPT-2 requires fixed length inputs but I'm not sure how to apply the attention mask to achieve this? Also do I need to add a bos/eos token to each line?
Appreciate the help | 11-21-2019 01:10:06 | 11-21-2019 01:10:06 | Please check this out: https://github.com/huggingface/transformers/issues/1816#issuecomment-554759751
This finetuning script https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py and the library do almost everything for you. Thus you don't need to split the text into fixed blocksize etc. Just give it one datafile for training and one for evaluating.<|||||>I was using the script but when generating samples it seemed like it was mixing information from multiple examples in the training set, when they should be treated as individual examples. Is there a way to avoid this?<|||||>Like in this thread it mentions not concatenating different articles in the same input: https://github.com/huggingface/transformers/issues/1145#issuecomment-527225854? Is there a way to do that with single examples (if they are not correlated)<|||||>Language models look for entire corpus. You can seperate them via `<|endoftext|>` special token. The model learns (and already learned if you based it on a pretrained one while finetuning) to separate contexts by `<|endoftext|>` token. This is a common case for language models. For example, XLNet uses `<eod>` as separator. This is also implied at the original GPT paper while covering how to train GPT for document classification etc. See figure one: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
But, still if you want to make sure that different rows in the data are not even connected while training, then you can pad each line until fill `block_size`. For example, GPT2 has 1024 input size and you have a data long 1540 (=1024+516) as tokens. Then you could have:
<1024 tokens><516 tokens + 508 padding><nextdata..>
Thus you can make sure that GPT2 doesn't get mixed data as input.<|||||>Also, please check this tutorial http://jalammar.github.io/illustrated-gpt2/ by Jay Alammar<|||||>I thought GPT-2 didn't use padding tokens https://github.com/huggingface/transformers/issues/1435#issuecomment-538779523 and it's unclear to me how to use the attention mask. I'll try just using the <|endoftext|> token however<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
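On the attention-mask question left open above, a minimal sketch of the "one example per line, padded to a fixed length" approach suggested earlier in the thread (illustrative only; GPT-2 has no dedicated pad token, so the end-of-text id is used as filler and masked out):
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

block_size = 64                                 # use 1024 for the full GPT-2 context
eos = tokenizer.encode('<|endoftext|>')[0]

lines = ["first short example", "a second, independent example"]
batch_ids, batch_mask = [], []
for line in lines:
    ids = tokenizer.encode(line)[: block_size - 1] + [eos]
    mask = [1] * len(ids) + [0] * (block_size - len(ids))
    ids = ids + [eos] * (block_size - len(ids))  # filler ids, ignored thanks to the mask
    batch_ids.append(ids)
    batch_mask.append(mask)

input_ids = torch.tensor(batch_ids)
attention_mask = torch.tensor(batch_mask)
outputs = model(input_ids, attention_mask=attention_mask)
print(outputs[0].shape)   # (2, block_size, vocab_size)
# For training, the filler positions should also be excluded from the labels.
```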
|
transformers | 1,895 | closed | Update Squ | Update https://github.com/huggingface/transformers/blob/master/examples/run_squad.py
Implemented solution from
https://github.com/huggingface/transformers/issues/1837#issuecomment-554206349 | 11-21-2019 00:50:16 | 11-21-2019 00:50:16 | Could you give at least some explanation in the title and the body of your post explaining what fixed or changed? Now it's very vague. <|||||>> Could you give at least some explanation in the title and the body of your post explaining what fixed or changed? Now it's very vague.
Sorry about the vagueness, I'll do this for any new pull requests. As for this one, it's obsolete now since this script has been refactored. |
transformers | 1,894 | closed | `overwrite_cache` argument in `run_lm_finetuning.py` not used at all | There is an unused argument in `run_lm_finetuning.py`
https://github.com/huggingface/transformers/blob/e70cdf083ddb8bfe298d43e6d70d698a3a2f56d3/examples/run_lm_finetuning.py#L418
Is it forgotten? | 11-20-2019 23:09:55 | 11-20-2019 23:09:55 | Indeed, thank you! |
transformers | 1,893 | closed | Cleanup TPU bits from `run_glue.py` | TPU runner is currently implemented in:
https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py.
We plan to upstream this directly into `huggingface/transformers`
(either `master` or `tpu`) branch once it's been more thoroughly tested. | 11-20-2019 22:51:57 | 11-20-2019 22:51:57 | Great, thanks @jysohn23 !<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=h1) Report
> Merging [#1893](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/454455c695ff38df1ed3670a43677fdd1abcedf3?src=pr&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #1893 +/- ##
==========================================
- Coverage 84.05% 84.03% -0.03%
==========================================
Files 97 94 -3
Lines 14316 14032 -284
==========================================
- Hits 12034 11792 -242
+ Misses 2282 2240 -42
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.95%)` | :arrow_down: |
| [transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: |
| [transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |
| [transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <0%> (-0.37%)` | :arrow_down: |
| [transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |
| [transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.28%)` | :arrow_down: |
| [transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <0%> (-0.11%)` | :arrow_down: |
| [transformers/tests/modeling\_distilbert\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.09% <0%> (-0.09%)` | :arrow_down: |
| [transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <0%> (-0.08%)` | :arrow_down: |
| [transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.59% <0%> (-0.07%)` | :arrow_down: |
| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=footer). Last update [454455c...2a9df0c](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 1,892 | closed | error on bert.fit for Squad dataset | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
The size of tensor a (384) must match the size of tensor b (12) at non-singleton dimension 1.
When passing values to the fit function in modelling/bert.py, I get the above error.
| 11-20-2019 19:47:26 | 11-20-2019 19:47:26 | We can't help you without more information.
Please fill in the issues templates required information.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 1,891 | closed | fixes issue with unrecognized arguments for AdamW | as suggested in https://github.com/huggingface/transformers/issues/830 | 11-20-2019 19:29:37 | 11-20-2019 19:29:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=h1) Report
> Merging [#1891](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/transformers/commit/1b35d05d4b3c121a9740544aa6f884f1039780b1?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## xlnet #1891 +/- ##
=====================================
Coverage 78.9% 78.9%
=====================================
Files 34 34
Lines 6181 6181
=====================================
Hits 4877 4877
Misses 1304 1304
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=footer). Last update [1b35d05...dcbea8c](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Same as #1890 |