repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,125 | closed | Correct typo in summary of tasks doc | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-12-2021 00:33:25 | 06-12-2021 00:33:25 | |
transformers | 12,124 | closed | [style] consistent nn. and nn.functional | As discussed in https://github.com/huggingface/transformers/issues/11600, this PR normalizes usage to `nn.functional.foo()` in place of `F.foo()` and replaces `torch.nn.` with `nn.`.
This is all automated by:
```
# deal with torch.nn
perl -pi -e 's|^import torch\n|from torch import nn\nimport torch\n|' `grep -Ilr torch.nn src`
find src -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \;
find src -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \;
# deal with F
find src -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \;
find src -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \;
find src -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \;
git checkout src/transformers/data/data_collator.py
perl -pi -e 's|import torch||' src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py
make fixup
```
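For illustration, this is the kind of change the rewrite produces (a minimal sketch, not taken from the actual diff):
```python
import torch

# before the rewrite, code typically looked like:
#   import torch.nn.functional as F
#   hidden = F.gelu(torch.nn.Linear(10, 10)(x))

# after the rewrite, the same code uses a single canonical import:
from torch import nn

x = torch.randn(2, 10)
hidden = nn.functional.gelu(nn.Linear(10, 10)(x))
```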
This is just `src/transformers` for now. If you're happy with it, I can do the same for `templates`, `tests` and `examples` next.
To the kind reviewers: this is a massive auto-rewrite, so if you notice any missed patterns, please point one instance out to me and I will adjust the regex to catch them all (since we need to do at least `tests`/`examples` too).
@sgugger, @LysandreJik, @patrickvonplaten
| 06-11-2021 23:45:05 | 06-11-2021 23:45:05 | |
transformers | 12,123 | closed | [optim] implement AdafactorSchedule | Currently, Adafactor doesn't use an external scheduler and doesn't expose its lr values, and, as reported in https://github.com/huggingface/transformers/issues/11612, the Trainer can't work without a scheduler, so this PR:
- implements `AdafactorSchedule` which is a proxy to `Adafactor` and can pull the lr values out of it
- adds a basic test
- updates docs
The implementation is somewhat hackish, but it's good enough for now.
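For reference, a minimal usage sketch of the proxy scheduler (the stand-in model and the optimizer keyword values below are just illustrative):
```python
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

model = torch.nn.Linear(4, 2)  # stand-in model, only to show the wiring

# Adafactor computes its learning rate internally; AdafactorSchedule proxies it
# so the Trainer (which expects a scheduler) can still query and log an lr value
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)

# the pair is then handed to the Trainer, e.g.
# trainer = Trainer(model=model, args=training_args, optimizers=(optimizer, lr_scheduler))
```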
Fixes: https://github.com/huggingface/transformers/issues/11612
@sgugger, @LysandreJik | 06-11-2021 21:36:35 | 06-11-2021 21:36:35 | Correct me if I am wrong, but I see two flaws with the current solution:
1) Only the first learning rate is returned as a float; the following ones are tensors of size 1, which gives an error while trying to pickle the logging history.
2) Adafactor has separate learning rates for each of the network components (linear layers, normalizations...). The current solution gives only the LR of the first component, usually the embedding matrix.
<|||||>In other words you're saying this was a half-baked solution. It is very much so. The original workaround idea was just to return a dumb number, to make things work with HF Trainer, as Adafactor wasn't designed to share its LRs with other components.
@LukasStankevicius, would you like to enhance my initial hack to fully support the features you mentioned lacking/incomplete? It surely could use some TLC.<|||||>For my own use, I modified Adafactor scheduler as follows:
```python
from transformers.optimization import AdafactorSchedule

class MyAdafactorSchedule(AdafactorSchedule):
    def get_lr(self):
        opt = self.optimizer
        # only report learning rates once the optimizer has taken at least one step
        if "step" in opt.state[opt.param_groups[0]["params"][0]]:
            # convert each per-parameter lr to a plain float so the logging history can be pickled
            lrs = [opt._get_lr(group, opt.state[p]).item() for group in opt.param_groups for p in group["params"]]
        else:
            lrs = []
        return [lrs]
```
Now it does not give errors while pickling the logging history and reports learning rates for all components. However, it pollutes the logs (a single logged step may contain a list of over 100 learning rates).
You could average them, but then what is the point of logging the lr at all?
So, I do not know the optimal solution here. Maybe just add a warning in the documentation about Adafactor learning rates. |
transformers | 12,122 | closed | Model card defaults | # What does this PR do?
This PR adds some better defaults to the auto-generated model cards for:
- the dataset names and tags
- the checkpoint it's fine-tuned from
- the type of task
As an example, on the classic fine-tuning of bert using the Trainer on GLUE, this is what we get for the metadata without telling the Trainer anything:
```
---
license: apache-2.0
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8431372549019608
- name: F1
type: f1
value: 0.8915254237288135
---
```
As a side note, the implementation completes the work begun on a file with the mapping of the auto model names (to avoid importing all models) in order to properly guess the class name. | 06-11-2021 21:25:12 | 06-11-2021 21:25:12 | Hi @sgugger , really cool feature :+1:
It would also be a good feature to distinguish between dev and test scores :)<|||||>@LysandreJik yeah no, let's not add it as it's not required.
Will be easier to maintain a mapping on the hub's side if it's not (needlessly) overridden. cc @osanseviero cf. https://github.com/huggingface/huggingface_hub/pull/109 in "How is a model's type of inference API and widget determined?" |
transformers | 12,121 | closed | Don't log anything before logging is setup in examples | # What does this PR do?
As flagged in #12090, the examples contain some logs before the logging is properly set up. This PR fixes that.
Fixes #12090 | 06-11-2021 20:51:57 | 06-11-2021 20:51:57 | |
transformers | 12,120 | closed | ValueError in predict function for ClassificationModel | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): bert
The problem arises when using:
* [ ] my own modified scripts:
I noticed that when I decrease my train batch size from 32 to 16, I get the following bug:

(Please note that training for 10 epochs happens successfully)
The tasks I am working on is:
* [ ] my own task or dataset:
My own dataset for binary classification of text documents.
## Expected behavior
Evaluation should happen as expected. I am not sure what to fix/how to investigate. Could not find much about it online.
ValueError: could not broadcast input array from shape (16,2) into shape (4,2) | 06-11-2021 20:10:52 | 06-11-2021 20:10:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,119 | closed | Adding ZeroShotImageClassificationPipeline | # What does this PR do?
- Based on CLIP
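As a usage sketch (not part of the PR diff itself; the checkpoint name, image path and exact keyword names here are assumptions for illustration), the new pipeline is meant to be called roughly like this:
```python
from transformers import pipeline

# zero-shot image classification: score an image against arbitrary candidate labels with CLIP
classifier = pipeline("zero-shot-image-classification", model="openai/clip-vit-base-patch32")
predictions = classifier("path/to/image.png", candidate_labels=["cat", "dog", "car"])
print(predictions)  # list of {"label": ..., "score": ...} entries sorted by score
```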
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@suraj-patil @LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 06-11-2021 19:49:29 | 06-11-2021 19:49:29 | Pinging @patil-suraj @LysandreJik <|||||>Friendly ping @LysandreJik @patil-suraj<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale?<|||||>> This is looking good, thanks a lot for adding this! Left a few comments.
Thanks for this; there were many places where the doc and code were out of sync or bad copy-paste here.
> Could you explain a bit how batching is handled ?
https://github.com/huggingface/transformers/pull/14225
This should contain more information on how it's handled internally. The pseudo code and images try to convey how it's done.
Tell me how this could be improved; it should actually belong in the doc.<|||||>@LysandreJik Can you do a second quick review please ? I think adding a new pipeline merits a bit more than 4 eyes.<|||||>@patil-suraj Do you think we can add a sigmoid to get `multi_label`, or are the outputs of the model not compatible with this ?
@FrancescoSaverioZuppichini <|||||>> @patil-suraj Do you think we can add a sigmoid to get `multi_label` or are the outputs of the model not compatible with this ? @FrancescoSaverioZuppichini
Technically, yes. But I don't know how well that will work.<|||||>Ok, let's drop it then. I actually thought about it with multiple prompts too (This photo is about ..., This photo is not about ...) to somehow recover the entailment behaviour, but CLIP was not trained with this in mind, so let's just skip it.
<|||||>@LysandreJik friendly ping to get a third opinion before merging.<|||||>After thinking about it, sigmoid probably won't work well since the model wasn't trained directly with it. We could (in theory) normalize the `image_logits` and return the ones that are closest to the biggest one (meaning they all "fit" the image in the same way). Following @patil-suraj's comment, I'm not sure how well this works either.<|||||>time to continue the widget PR? https://github.com/huggingface/huggingface_hub/pull/118
transformers | 12,118 | closed | Passing a custom stopping_criteria list to model.generate() yields a multiple value error for that keyword arg | ---
name: "\U0001F41B Bug Report"
about: Submit a bug report to help us improve transformers
title: ''
labels: ''
assignees: ''
---
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: macOS-10.15.5-x86_64-i386-64bit
- Python version: 3.8.8
- PyTorch version (GPU?): 1.18.1 (no)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
- set model_kwargs programmatically: @patrickvonplaten
- set stopping_criteria programmatically: @Narsil
## Information
Model I am using (Bert, XLNet ...): GPT2DoubleHeadsModel (pretrained model: distilgpt2)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below): Any script I write that passes a custom StoppingCriteriaList via the stopping_criteria keyword arg of generation_utils.GenerationMixin.generate() seems to reproduce this issue.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below): a simple personal chatbot harness with a custom newline stopping criterion
## To reproduce
Steps to reproduce the behavior:
1. Load a trained model using transformer.generation_utils.GenerationMixin
2. Define a custom StoppingCriteria and StoppingCriteriaList
3. Pass the custom StoppingCriteriaList as a keyword arg to model.generate(), e.g. model.generate(...stopping_criteria=my_custom_list...)
The above steps will yield a "got multiple values for keyword argument 'stopping_criteria'" error message.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Ideally, there would be no error message, and the stopping_criteria kwarg would be passed through normally. | 06-11-2021 19:30:57 | 06-11-2021 19:30:57 | Hey @bitbanger,
Could you provide a reproducible code snippet that we could just copy paste into a python shell to reproduce the error? :-) Thanks!<|||||>Hi there! Thanks for your response! Sure, here you go. I've confirmed that this code yields the error when run in the environment described in my report:
```
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel
from transformers.generation_stopping_criteria import StoppingCriteria, StoppingCriteriaList
class DummyStopCriterion(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, score: torch.FloatTensor, **kwargs):
        # stop once the generated sequence is longer than 10 tokens
        return len(input_ids.squeeze()) > 10
tok = GPT2Tokenizer.from_pretrained('distilgpt2')
model = GPT2DoubleHeadsModel.from_pretrained('distilgpt2')
input_ids = tok.encode('This should reproduce the bug', return_tensors='pt')
model.generate(input_ids, stopping_criteria=StoppingCriteriaList([DummyStopCriterion()]))
```<|||||>Adding a bit more context,
the error is
```
transformers.generation_utils.GenerationMixin.greedy_search() got multiple values for keyword argument 'stopping_criteria'
```
The reason is that `stopping_criteria` is **not** a valid argument to `generate`, so it gets passed via `model_kwargs`, which in turn are passed to `greedy_search`, which already receives a `stopping_criteria` because one gets created within `generate`.
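To illustrate the collision with a simplified, runnable sketch (this is not the actual `generate` implementation, just a stand-in showing the keyword clash):
```python
from transformers.generation_stopping_criteria import StoppingCriteriaList, MaxLengthCriteria

def greedy_search(input_ids, stopping_criteria=None, **model_kwargs):
    return input_ids  # stub standing in for GenerationMixin.greedy_search

def generate(input_ids, **model_kwargs):
    # stopping_criteria is not an explicit parameter of generate() in this version,
    # so a user-supplied value ends up inside model_kwargs...
    stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])  # built internally
    # ...and the call below then passes the keyword twice
    return greedy_search(input_ids, stopping_criteria=stopping_criteria, **model_kwargs)

generate([0, 1, 2], stopping_criteria="user value")  # TypeError: got multiple values for 'stopping_criteria'
```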
The proposed solution is simply to enable it (along with `logits_processor`) as a real argument of `generate` (the doc should specify it's intended for users with know-how; most users should use the simple arguments).
wdyt ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,117 | closed | GPT Neo Tokenizers can't change BOS or EOS token | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- Platform: Linux-5.8.0-55-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: RTX 3090
- Using distributed or parallel set-up in script?: Using DeepSpeed
Conda env:
channels:
- pytorch
- nvidia
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- blas=1.0=mkl
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2021.5.30=ha878542_0
- certifi=2021.5.30=py37h89c1867_0
- cudatoolkit=11.1.74=h6bb024c_0
- ffmpeg=4.3=hf484d3e_0
- freetype=2.10.4=h5ab3b9f_0
- gmp=6.2.1=h2531618_2
- gnutls=3.6.15=he1e5248_0
- intel-openmp=2021.2.0=h06a4308_610
- joblib=1.0.1=pyhd8ed1ab_0
- jpeg=9b=h024ee3a_2
- lame=3.100=h7b6447c_0
- lcms2=2.12=h3be6417_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libblas=3.9.0=9_mkl
- libcblas=3.9.0=9_mkl
- libffi=3.3=he6710b0_2
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.5.0=h14aa051_19
- libgfortran4=7.5.0=h14aa051_19
- libiconv=1.15=h63c8f33_5
- libidn2=2.3.1=h27cfd23_0
- liblapack=3.9.0=9_mkl
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtasn1=4.16.0=h27cfd23_0
- libtiff=4.2.0=h85742a9_0
- libunistring=0.9.10=h27cfd23_0
- libuv=1.40.0=h7b6447c_0
- libwebp-base=1.2.0=h27cfd23_0
- lz4-c=1.9.3=h2531618_0
- mkl=2021.2.0=h06a4308_296
- mkl-service=2.3.0=py37h27cfd23_1
- mkl_fft=1.3.0=py37h42c9631_2
- mkl_random=1.2.1=py37ha9443f7_2
- ncurses=6.2=he6710b0_1
- nettle=3.7.2=hbbd107a_1
- numpy=1.20.2=py37h2d18471_0
- numpy-base=1.20.2=py37hfae3a4d_0
- olefile=0.46=py37_0
- openh264=2.1.0=hd408876_0
- openssl=1.1.1k=h27cfd23_0
- pillow=8.2.0=py37he98fc37_0
- pip=21.1.1=py37h06a4308_0
- python=3.7.10=hdb3f193_0
- python_abi=3.7=1_cp37m
- pytorch=1.8.1=py3.7_cuda11.1_cudnn8.0.5_0
- readline=8.1=h27cfd23_0
- scikit-learn=0.23.2=py37hddcf8d6_3
- scipy=1.5.3=py37h8911b10_0
- setuptools=52.0.0=py37h06a4308_0
- six=1.15.0=py37h06a4308_0
- sqlite=3.35.4=hdfb4753_0
- threadpoolctl=2.1.0=pyh5ca1d4c_0
- tk=8.6.10=hbc83047_0
- torchaudio=0.8.1=py37
- torchvision=0.9.1=py37_cu111
- typing_extensions=3.7.4.3=pyha847dfd_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
- pip:
- chardet==4.0.0
- click==8.0.1
- datasets==1.7.0
- deepspeed==0.4.0+8def3cb
- dill==0.3.3
- filelock==3.0.12
- fsspec==2021.6.0
- huggingface-hub==0.0.8
- idna==2.10
- importlib-metadata==4.5.0
- multiprocess==0.70.11.1
- ninja==1.10.0.post2
- packaging==20.9
- pandas==1.2.4
- protobuf==3.17.3
- psutil==5.8.0
- pyarrow==3.0.0
- pyparsing==2.4.7
- python-dateutil==2.8.1
- pytz==2021.1
- regex==2021.4.4
- requests==2.25.1
- sacremoses==0.0.45
- tensorboardx==1.8
- tokenizers==0.10.3
- tqdm==4.49.0
- transformers==4.6.1
- triton==0.4.2
- urllib3==1.26.5
- xxhash==2.0.2
- zipp==3.4.1
### Who can help
@LysandreJik seems to be the one to tag as this is an issue with the tokenizer
## Information
When loading the GPT Neo tokenizer with either the GPT2Tokenizer or the AutoTokenizer, you are unable to change the EOS or BOS tokens by passing arguments.
Model I am using: GPT Neo 2.7B and 1.3B and its Tokenizer
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
I am trying to finetune the model using DeepSpeed and a custom dataset
## To reproduce
Steps to reproduce the behavior:
1. Load GPT Neo Tokenizer from pretrained using either AutoTokenizer or GPT2Tokenizer
2. Pass arguments to change EOS and BOS tokens
3. Print out the tokens using tokenizer.bos_token and tokenizer.eos_token
4. Notice that it has not changed
5. Do steps 1-3 for another model, say gpt2 and notice that it does change
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "gpt2-xl", bos_token='<|beginoftext|>',
    eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
print()

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/gpt-neo-2.7B", bos_token='<|beginoftext|>',
    eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
print()

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/gpt-neo-1.3B", bos_token='<|beginoftext|>',
    eos_token='<|endoftext|>', pad_token='<|pad|>')
print(tokenizer.bos_token)
print(tokenizer.eos_token)
quit()
```
That gives this:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|beginoftext|>
<|endoftext|>
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|endoftext|>
<|endoftext|>
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
<|endoftext|>
<|endoftext|>
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The expected behavior is that the values of the bos and eos tokens change. They do not change, though.
<!-- A clear and concise description of what you would expect to happen. -->
| 06-11-2021 18:22:49 | 06-11-2021 18:22:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi there, I just tried this but couldn't reproduce it. Here's the colab if you want to check https://colab.research.google.com/drive/1gGWMOdjF6wIVfUlo0XE1LfupY5T7ioVS?usp=sharing<|||||>I was using 4.6.1. Perhaps its been fixed. I ran your code as well and didn't see the issue. |
transformers | 12,116 | closed | Enable add_prefix_space on run_ner if necessary | # What does this PR do?
Enable `add_prefix_space` for the tokenizer in run_ner and run_ner_no_trainer if it needs to be instantiated with it.
Fixes #9607
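For context, a minimal sketch of the kind of tokenizer instantiation this enables (the checkpoint name and example words are just for illustration):
```python
from transformers import AutoTokenizer

# byte-level BPE tokenizers such as RoBERTa's need add_prefix_space=True when the
# input is already split into words, which is how run_ner feeds the tokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
encoding = tokenizer(["John", "lives", "in", "Berlin"], is_split_into_words=True)
```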
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
I've tested it:
```
% python -m pytest -n auto --dist=loadfile -s -v ./examples/
...
Results (256.02s):
24 passed
21 skipped
```
additionally checked style and quality then fixed it up:
```
% make style && make quality && make fixup
...
All done! ✨ 🍰 ✨
```
## Who can review?
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 06-11-2021 16:37:43 | 06-11-2021 16:37:43 | Thanks for your PR, but this is too complicated. The examples are just examples, and should be modified by users directly for specific use cases; we can't support everything out of the box.<|||||>@sgugger I agree with you that it's somewhat complicated.
So, I've pushed code that is simplified as much as possible and also supports training RoBERTa.
Actually, I've run into #9607 when training RoBERTa for a NER task.
Referring to #9607, only roberta-base and roberta-large have the issue.
So, it's enough that run_ner supports RoBERTa for now.
If you think it's still too complicated, I will close this PR and just use it myself.<|||||>Thank you for your suggestion!
Let me update the PR.<|||||>I've updated the PR.
@sgugger, please take a look through it. |
transformers | 12,115 | closed | Hosted inference api keeps returning 400 error | I'm not sure if it's okay to open an issue on this topic, but I couldn't find a place to share my problem, so I'm making an issue.
### Problem description
When I try to run inference on a public model (facebook/blenderbot-1B-distill), it keeps returning a 400 error with the message below, whether I try it on the model hub or through an HTTP request.
`'We could not properly load your model with any of the classes {model_classes}, are you sure this model can be loaded with the specified task ?'`
I used this model normally a few days ago, but now it's not working. May I ask for help? Any advice would be appreciated.

| 06-11-2021 13:58:43 | 06-11-2021 13:58:43 | I found it resolved by fixing the model config. Now it's working |
transformers | 12,114 | closed | Get the loss in LongformerForQuestionAnswering for fine-tuning | Hello,
I'm trying to fine-tune **LongformerForQuestionAnswering** on a custom dataset. I've written a training script (without using the Hugging Face _Trainer_), and I need the loss of the model for that. On the Longformer docs page, it's written that:
**loss** (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
Meaning that the model is supposed to return **loss** when the input _label_ is provided; however, the model takes no such input (it's not mentioned in the doc, and the model generates an error when passing the input (label=...))
I've noticed that for **LongformerForMaskedLM** this is not an issue since the model does take _label_ as an input.
I am wondering if there is a way to get the **loss** from LongformerForQuestionAnswering and perhaps to correct this on the docs page.
Thanks!
| 06-11-2021 13:28:21 | 06-11-2021 13:28:21 | In HuggingFace Transformers, `xxxForQuestionAnswering` models don't take a `labels` argument as input. Rather, one should provide `start_positions` and `end_positions`. These indicate which token is the start of the answer, and which token is the end of the answer.
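For illustration, a minimal hedged sketch of providing these and reading the loss (the checkpoint is a public Longformer base model, and the span indices are made up purely for the example):
```python
import torch
from transformers import LongformerTokenizerFast, LongformerForQuestionAnswering

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-base-4096")

question, context = "Who wrote the book?", "The book was written by Jane Doe in 1999."
inputs = tokenizer(question, context, return_tensors="pt")

# start_positions / end_positions are the token indices of the answer span
outputs = model(**inputs, start_positions=torch.tensor([9]), end_positions=torch.tensor([10]))
loss = outputs.loss  # sum of cross-entropy over start and end positions
```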
Check out this notebook which showcases how to fine-tune a model for question-answering: https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,113 | closed | Optimizing away the `fill-mask` pipeline. | # What does this PR do?
- Don't send anything to the tokenizer unless needed. The vocab check is much faster.
- Keep backward compatibility by sending data to the tokenizer when needed. Users handling the warning messages will see performance benefits again.
- Make `targets` and `top_k` work together better: `top_k` cannot be higher than `len(targets)` but can still be smaller (see the usage sketch after this list).
- Actually simplify the `target_ids` in case of duplicates (it can happen because we're parsing raw strings).
- Removed useless code that failed on empty strings; it only worked if the empty string was in first position, so we now ignore them instead.
- Changed the related tests, as they would only fail correctly when the incorrect value was in first position.
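As a usage sketch of the behaviour described above (the checkpoint and target words are just an example):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="distilbert-base-uncased")

# top_k is capped at len(targets) when both are given, and targets already present
# in the tokenizer's vocabulary only need a fast vocabulary lookup
print(unmasker("The capital of France is [MASK].", targets=["paris", "london"], top_k=2))
```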
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12099
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @EtaoinWu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 06-11-2021 12:07:35 | 06-11-2021 12:07:35 | Ping @LysandreJik Do you mind doing a quick review ?
(Tests do not have to be modified for this to work, but it will output a lot of warnings and be slower than necessary)<|||||>Thanks for the ping, reviewing now!<|||||>That's a very neat idea !
It must be quite slow though, right ?<|||||>Yes, definitely too slow to actually put in tests and generally a bad idea to rely on model hub checkpoints for this I think, it was just the quickest way to ensure that all tokenizer/model pairs really do continue working<|||||>Yes, maybe have a script or something for larger refactors for sure. |
transformers | 12,112 | closed | How to pass `past_key_values` to GPTNeo model? | How to pass `past_key_values` to the GPTNeo model?
I want to pass `past_key_values` to the GPTNeo model. I set `past_key_values` as a Tuple[Tuple[torch.Tensor]] of shape `(num_layers, 2, batch_size, seq_length, num_heads, d_head)`, but I got the error message below.
```
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/models/prefix_tuning_gpt_neo.py", line 61, in forward
use_cache=True
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/models/gpt_neo_for_causal_lm_wrapper.py", line 87, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 866, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 563, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 505, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/kaki_ai/test/big-lm/prefix_tuning/transformers/models/gpt_neo/modeling_gpt_neo.py", line 412, in forward
key_value_hidden_states = torch.cat([past, hidden_states], dim=1)
RuntimeError: Tensors must have same number of dimensions: got 3 and 4
```
@patil-suraj | 06-11-2021 10:48:20 | 06-11-2021 10:48:20 | never mind. I solved it.<|||||>How? I find the shape of `past_key_values` is very strange. |
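Regarding the "how" above, one safe approach (sketched below with an illustrative checkpoint) is to reuse the cache structure the model itself returns with `use_cache=True` instead of hand-building the tuples; the traceback also suggests that, in this version, the local-attention layers cache 3-D hidden states rather than 4-D per-head key/value tensors, which would explain the dimension mismatch:
```python
import torch
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
out = model(**inputs, use_cache=True)
past = out.past_key_values  # reuse this structure rather than building the tuples by hand

next_token = inputs["input_ids"][:, -1:]  # feed only the newest token together with the cache
out2 = model(input_ids=next_token, past_key_values=past, use_cache=True)
```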
transformers | 12,111 | closed | add readme for flax clm | # What does this PR do?
Update the language modeling readme for CLM.
| 06-11-2021 10:15:41 | 06-11-2021 10:15:41 | |
transformers | 12,110 | closed | Fix head masking generate tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes:
FAILED tests/test_modeling_bart.py::BartModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_bigbird_pegasus.py::BigBirdPegasusModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_blenderbot.py::BlenderbotModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_blenderbot_small.py::BlenderbotSmallModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_fsmt.py::FSMTModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_led.py::LEDModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_m2m_100.py::M2M100ModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_marian.py::MarianModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_mbart.py::MBartModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_pegasus.py::PegasusModelTest::test_generate_with_head_masking
FAILED tests/test_modeling_speech_to_text.py::Speech2TextModelTest::test_generate_with_head_masking
on GPU
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-11-2021 08:02:35 | 06-11-2021 08:02:35 | |
transformers | 12,109 | closed | Why attention mask is -10000 but not * 0? | I am reading the code in RoBERTa and I found that the way padding tokens are kept out of self-attention is by subtracting 10000 from their scores in the function `get_extended_attention_mask`.
I am wondering why the mask isn't implemented by directly multiplying the values of padding tokens by zero? | 06-11-2021 06:48:47 | 06-11-2021 06:48:47 | Hi,
searching previous Github issues, I found this one, which might help you: #542<|||||>Thanks for the reference! I wonder whether implementing a softmax that applies the mask after e^x might also be an approach to implement `attention_mask`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
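Regarding the -10000 vs. multiply-by-zero question above, a short self-contained demonstration (the score values are made up for illustration) of why the additive mask works while zeroing the raw scores would not:
```python
import torch

scores = torch.tensor([2.0, 1.0, 0.5, 0.3])             # raw attention scores; last position is padding
additive_mask = torch.tensor([0.0, 0.0, 0.0, -10000.0])

# adding -10000 before softmax drives exp(score - 10000) to ~0,
# so the padded position gets essentially zero attention weight
print(torch.softmax(scores + additive_mask, dim=-1))

# multiplying the raw score by 0 leaves exp(0) = 1 in the numerator,
# so the padded position would still receive a sizeable share of attention
print(torch.softmax(scores * torch.tensor([1.0, 1.0, 1.0, 0.0]), dim=-1))
```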
transformers | 12,108 | closed | How to access training loss in TrainerCallback? | Hi,
How can i access the current loss in the `on_step` function in `TrainerCallback`? | 06-11-2021 04:27:46 | 06-11-2021 04:27:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
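Regarding the callback question above, one hedged sketch of a common approach (the callback class name is made up; `on_log` is the hook that receives the running loss in its `logs` dict at each logging step):
```python
from transformers import TrainerCallback

class LossPrinterCallback(TrainerCallback):
    # on_log fires at each logging step; during training the logs dict contains "loss"
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None and "loss" in logs:
            print(f"step {state.global_step}: loss = {logs['loss']}")
```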
transformers | 12,107 | closed | How can I add a CNN layer on top of bert model? | ### Information
I'm working on a **binary classification task** and used the **BERT** model from the transformers library via the custom model below:
```python
from torch import nn
from transformers import BertModel

# BERT_PATH is defined elsewhere in my script (a local path or a model id of a bert-base checkpoint)

class BERT(nn.Module):
    def __init__(self):
        super(BERT, self).__init__()
        self.bert = BertModel.from_pretrained(BERT_PATH, return_dict=False)
        self.dropout = nn.Dropout(0.2)
        self.out = nn.Linear(768, 1)  # single logit for binary classification

    def forward(self, ids, mask, token_type_ids):
        outputs = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        # Use the pooled output
        output = self.dropout(outputs[1])
        return self.out(output)
```
### What I'm looking for?
Now I'm looking to use a `CNN` layer on top of `BERT` with the following configurations to see how my model will perform:
```
self.cnn = nn.Sequential(
nn.Conv2d(? ? ?),
nn.ReLU(),
nn.MaxPool2d(? ? ?)
)
```
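One hedged way such a configuration could be filled in (this is a sketch, not a definitive answer: the channel sizes are illustrative, and it switches to `Conv1d` over the sequence of token embeddings rather than `Conv2d`, which sidesteps most of the dimension issues):
```python
from torch import nn
from transformers import BertModel

class BertCNN(nn.Module):
    def __init__(self, bert_path="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_path, return_dict=False)
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels=768, out_channels=128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),  # pool over the sequence dimension
        )
        self.out = nn.Linear(128, 1)

    def forward(self, ids, mask, token_type_ids):
        sequence_output, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        features = self.cnn(sequence_output.transpose(1, 2)).squeeze(-1)  # (batch, 128)
        return self.out(features)
```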
### The problem encountered.
I have already tried but encountered errors regarding setting the dimensions. In your opinion what configuration should I put in the sequential model to avoid the problem of adjusting the dimensions? If you can **copy-paste** my code and offer me the final custom model with the right **Sequential model included**, I will be thankful. | 06-11-2021 01:44:01 | 06-11-2021 01:44:01 | Hi,
please ask this question on the [forum](https://discuss.huggingface.co/). We like to keep Github issues for bugs/feature requests.
Thanks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,106 | closed | Add GPT-J 6B support to the gpt-neo implementation | # What does this PR do?
This PR mainly adds support for the GPT-J 6B model. A [conversion script](https://gist.github.com/finetuneanon/ee196c6cd16af1de4ca444862414683a) and [config.json](https://gist.github.com/finetuneanon/a55bdb3f5881e361faef0e96e1d41f09) for the slim checkpoint are also available.
It also addresses the local attention issue from #11320 in the same way as PR #11630 and works around an issue with torch.multinomial that allows zero-probability tokens to be chosen when sampling from an fp16 model.
Special thanks to the great folks of the EleutherAI discord, who helped me debug the RoPE implementation and to @kurumuz (NovelAI) who worked on this as well.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11320
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-11-2021 00:47:00 | 06-11-2021 00:47:00 | Just as a note, we have a PyTorch checkpoint for GPT-J that we will be ready to upload once this PR goes through.<|||||>The conversion script in the top post generates a single file checkpoint, but for models of this size, I've found split up checkpoints usually more efficient to load and handle. Such split up checkpoints can be generated using [this conversion script](https://gist.github.com/finetuneanon/7dd417a31338a63f219a49702e0550db) and loaded as follows:
```python
import torch

try:
    from collections.abc import MutableMapping
except ImportError:
    from collections import MutableMapping
from pathlib import Path

class Checkpoint(MutableMapping):
    # lazily loads the split checkpoint: m.pt maps parameter names to the files holding them
    def __init__(self, chkpt_dir, device="cpu"):
        self.device = device
        self.chkpt_dir = Path(chkpt_dir)
        self.checkpoint = torch.load(str(chkpt_dir / Path("m.pt")))

    def __len__(self):
        return len(self.checkpoint)

    def __getitem__(self, key):
        # each parameter tensor is stored in its own file and only loaded on access
        path = self.chkpt_dir / Path(self.checkpoint[key]).name
        return torch.load(str(path), map_location=self.device)

    def __setitem__(self, key, value):
        return

    def __delitem__(self, key):
        return

    def keys(self):
        return self.checkpoint.keys()

    def __iter__(self):
        for key in self.checkpoint:
            yield (key, self.__getitem__(key))

    def __copy__(self):
        return Checkpoint(self.chkpt_dir, device=self.device)

    def copy(self):
        return Checkpoint(self.chkpt_dir, device=self.device)

from transformers import GPTNeoForCausalLM, AutoConfig

# model_name should point at the config.json for the checkpoint (see the links in the top post)
config = AutoConfig.from_pretrained(model_name)
model = GPTNeoForCausalLM.from_pretrained(pretrained_model_name_or_path=None, config=config, state_dict=Checkpoint("checkpoint"))
```
Having a more integrated or better specified way of loading them would be helpful, but I'm not sure what the best place for that would be.
**Edit: Updated to handle renamed checkpoint folders.**<|||||>I noticed that there was a typo in the config file linked from the PR text, which caused it to be invalid JSON. It's fixed now.<|||||>Also ran some evaluations using the [eval harness](https://github.com/EleutherAI/lm-evaluation-harness) on the ported model now:
| Task | Metric |Value |
|----------|---------------|-----:|
|lambada |ppl |4.1060|
| |ppl_stderr |0.0886|
| |acc |0.6833|
| |acc_stderr |0.0065|
|winogrande|acc |0.6480|
| |acc_stderr |0.0134|
|piqa |acc |0.7541|
| |acc_stderr |0.0100|
| |acc_norm |0.7612|
| |acc_norm_stderr|0.0099|
|hellaswag |acc |0.4895|
| |acc_stderr |0.0050|
| |acc_norm |0.6614|
| |acc_norm_stderr|0.0047|<|||||>The eval numbers are a little shy of what we have for the Jax model, but close enough that FP rounding could plausibly explain the difference:
Lambada: 3.99 ppl, 0.697 acc
Winogrande: 0.653
PiQA: 0.765
HellaSwag: 0.661<|||||>It should also be noted that my results were with fp16. It should be easy enough to modify the conversion script to cast to fp32 (just replace `half()` with `float()`), which might give results closer to the original evaluation, but I don't currently have access to hardware which could run the model in fp32 at a reasonable speed to do an evaluation on it.<|||||>> It should also be noted that my results were with fp16. It should be easy enough to modify the conversion script to cast to fp32 (just replace `half()` with `float()`), which might give results closer to the original evaluation, but I don't currently have access to hardwhere which could run the model in fp32 at a reasonable speed to do an evaluation on it.
Oh yes, I concur. That wasnβt meant as a detraction at all. Iβm not sure if EAI had enough free GPUs, but I can look at getting the evals run at full precision later this week.<|||||>Just curious - how long before new models are merged to the repo, generally speaking? And how long until it's available in the hosted inference API?<|||||>Hi @finetuneanon
Amazing, thanks a lot for porting `GPT-J` so quickly!
The changes to local attention look good to me. But would be nice to split the PR into two
1. simplify local attention
2. and add GPT-J in a new model file.
While supporting `GPT-J` in the `GPTNeo` model is totally doable, we would like to avoid that. The overall philosophy is to combine more than one model only if the forward pass is exactly similar or requires some really minor changes. If I understand it correctly, here are the differences between `GPT-J` and `GPTNeo` :
- `GPT-J` uses rotary embeddings
- It scales attention weights
- no bias in the attention projection layer (the `out_proj` layer in attention class)
- does not use layer_norm before the feed forward layer (`mlp`)
- no residual connection between `hidden_states` and `attention_output` , just one residual connection which is added to `attention + mlp(hiddn)`
- uses bias in the output projection layer
- does not tie word embeddings with the output layer
The current PR handles this using the `config.jax` argument, but it's a bit confusing, and generally, one config param should not control this many changes. So if we really decide to support this in `GPTNeo` we would probably end up with different config params like `attention_projection_bias`, `output_bias`, `attention_residual`, `scale_attention`. So it's cleaner IMO to add a new model for this.
Also, `Transformers` isn't really a modular toolkit for building different models. The goal is to keep every model file responsible for one model so it becomes easier for everyone to understand it and modify it according to their needs. It also makes it easy for us to maintain these different models.
cc @LysandreJik , @sgugger , @patrickvonplaten
Happy to help in any way to add the model :) <|||||>To be quite honest, I think reading a couple of if branches makes the differences between models much clearer than having to compare two completely different model classes with small differences. You mention that transformers is not intended to be a modular framework, so there should be no issue with controlling these changes through a single configuration variable, although perhaps the naming of `jax` is not optimal. I would be open to changing this to e.g. `gptj`. Splitting it up into multiple options would only make sense to actually turn it into a modular framework.
I would also prefer not splitting the pull request.<|||||>Let's agree to disagree: this is one of the core principle of the Transformers library, explicitly stated in our [philosophy](https://huggingface.co/transformers/philosophy.html). We've been proceeding like this since the beginning, and while we definitely understand where you're coming from, this is a defining principle of our library which we are not eager to change as it has been validated both by [surveys](https://discuss.huggingface.co/t/transformers-huge-community-feedback/120) and by community feedback.
Unfortunately, we will have to insist on GPT-J following the same approach as the rest of the models - for philosophy, maintenance and coherence's sake. Let us know if you would like us to take over, we are happy to! Thank you for your understanding.<|||||>Yes, please take over in that case.<|||||>> Let's agree to disagree: this is one of the core principle of the Transformers library, explicitly stated in our [philosophy](https://huggingface.co/transformers/philosophy.html). We've been proceeding like this since the beginning, and while we definitely understand where you're coming from, this is a defining principle of our library which we are not eager to change as it has been validated both by [surveys](https://discuss.huggingface.co/t/transformers-huge-community-feedback/120) and by community feedback.
>
> Unfortunately, we will have to insist on GPT-J following the same approach as the rest of the models - for philosophy, maintenance and coherence's sake. Let us know if you would like us to take over, we are happy to! Thank you for your understanding.
If the current changes were to be submitted as a new model, instead of a modification to GPT-Neo, would there be any significant further changes to be made?<|||||>> If the current changes were to be submitted as a new model, instead of a modification to GPT-Neo, would there be any significant further changes to be made?
The new modeling file is the main thing; we have a [template to add new models](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) that should take care of everything else.<|||||>I have opened a PR that attempts to refit this into HF's paradigm. I recommend closing this PR and directing discussion to #12243<|||||>As discussed above, this PR will be split into two
- GPT-J (which Stella has already started)
- simplifying GPTNeo local attention
Closing this PR now. |
transformers | 12,105 | closed | What is the correct way to pass labels to DetrForSegmentation? | The [current documentation](https://huggingface.co/transformers/master/model_doc/detr.html#transformers.DetrForSegmentation.forward) for `DetrModelForSegmentation.forward` says the following about `labels` kwarg:
> The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the **masks a torch.FloatTensor of shape (number of bounding boxes in the image, 4).**
But when I looked at the tests, it seems the shape of `masks` is `torch.rand(self.n_targets, self.min_size, self.max_size)` .
https://github.com/huggingface/transformers/blob/d2753dcbec7123500c1a84a7c2143a79e74df48f/tests/test_modeling_detr.py#L87-L103
---
I'm guessing this is a documentation mixup!
Anyway, it would be super helpful to include a snippet in the DETR docs that shows how to correctly pass masks/other labels + get the loss/loss dict.
CC: @NielsRogge | 06-10-2021 22:15:23 | 06-10-2021 22:15:23 | Thanks for noticing, that's a mistake. The masks need to be a torch.FloatTensor of shape (number of bounding boxes in the image, height, width) - with height and width equal to those of the `pixel_values`.
Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
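For reference, the target format would then look something like this (a rough sketch with made-up class ids and boxes; `model` and `pixel_values` are assumed from the usual DETR setup, and boxes are normalized `(center_x, center_y, width, height)`):
```python
import torch

height, width = pixel_values.shape[-2:]  # same spatial size as the padded pixel_values
labels = [
    {
        "class_labels": torch.tensor([1, 3], dtype=torch.long),  # (num_boxes,)
        "boxes": torch.tensor([[0.5, 0.5, 0.2, 0.3],
                               [0.3, 0.4, 0.1, 0.1]]),           # (num_boxes, 4)
        "masks": torch.zeros(2, height, width),                  # (num_boxes, height, width)
    }
    # one dict per image in the batch
]
outputs = model(pixel_values=pixel_values, labels=labels)
loss, loss_dict = outputs.loss, outputs.loss_dict
```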
However, I've got no less than 5 notebooks coming up that illustrate how to use DETR ;)
I will fix this docs issue, together with some other small improvements, in a PR. <|||||>No worries! I got it working after this. Training is a bit finicky though.
Looking forward to those notebooks!! |
transformers | 12,104 | open | Issue with mBART50 es-en translation | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): mBART-large-50-many-to-one-nmt
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is: Translation
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
The below notebook can be used to reproduce the results
1. https://colab.research.google.com/drive/1LEY3bI9mS7D-n6rJ70iKq3lN9_DQCQh7?usp=sharing
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I've used this model to translate a lot of Spanish text. But I observed that for some examples it's printing completely random things.
The above example should return something like this `1980 Mount St. Helens eruption`
The current output is `The Committee recommends that the State party take all necessary measures to ensure the full implementation of the present recommendations, inter alia, by transmitting them to the members of the Council of Ministers, the Parliament, the Parliamentary Assembly and the Senate, the Parliamentary Assembly and the National Assembly, for appropriate consideration and further action.`
Tagging @patrickvonplaten, @patil-suraj here. I believe this is not really a code issue, but something intrinsic to the model. Any ideas why this is happening?
| 06-10-2021 18:19:34 | 06-10-2021 18:19:34 | Ping <|||||>Hi @patrickvonplaten any thing you wanted to check. Sorry for the late response, was a bit tied up<|||||>@patil-suraj - seems like multiple people have problems with mBART50...should we maybe leave a note in the official docs about it? |
transformers | 12,103 | open | ViT tensorflow Implementation | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I was reading about ViT in the HuggingFace documentation and noticed there is no TF implementation of it. It would be great to have it in the HuggingFace repo.
## Motivation
I have seen [this](https://keras.io/examples/vision/image_classification_with_vision_transformer/) and think it wouldn't be so hard. We can convert the PyTorch pretrained weights and use them for the TensorFlow model.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. --> | 06-10-2021 15:43:36 | 06-10-2021 15:43:36 | Hi, I've contributed ViT in PyTorch and wanted to also add ViT in Tensorflow, but there's currently a limitation to adding TF models that don't expect `input_ids` as an input (ViT only requires `pixel_values`). Any TF model in HuggingFace currently relies on a `input_processing` function (defined [here](https://github.com/huggingface/transformers/blob/77f4c46b501322e9bffb5416dfbf0397deefd7d8/src/transformers/modeling_tf_utils.py#L315)), and this function needs to be updated to also support models that don't expect input_ids as input.
cc @Rocketknight1
My current implementation can be found [here](https://github.com/NielsRogge/transformers/blob/modeling_vit_tf_v2/src/transformers/models/vit/modeling_tf_vit.py).<|||||>> Any TF model in HuggingFace currently relies on a `input_processing` function (defined [here](https://github.com/huggingface/transformers/blob/77f4c46b501322e9bffb5416dfbf0397deefd7d8/src/transformers/modeling_tf_utils.py#L315)), and this function needs to be updated to also support models that don't expect input_ids as input.
nice work @NielsRogge. Could you answer these two questions, please?
1. Can't /Should we use something like `ViTFeatureExtractor` that was defined [here](https://github.com/huggingface/transformers/blob/fe3576488ad122b12364c66ef09dee38b3763f5f/src/transformers/models/vit/feature_extraction_vit.py#L31)??
2. What's the problem of current implementation of `input_processing ` ? if we feed `input_ids` tensor of shape `[batch_size, w, h, c]` to it what would be the problems ? <|||||>> 1. Can't /Should we use something like `ViTFeatureExtractor` that was defined [here](https://github.com/huggingface/transformers/blob/fe3576488ad122b12364c66ef09dee38b3763f5f/src/transformers/models/vit/feature_extraction_vit.py#L31)??
If `TFViTModel` and `TFViTForImageClassification` will be available, you can indeed use `ViTFeatureExtractor` to prepare images for the model (you only need to update the `return_tensors` parameter value to `"tf"` instead of `"pt"`).
> 2\. What's the problem of current implementation of `input_processing ` ? if we feed `input_ids` tensor of shape `[batch_size, w, h, c]` to it what would be the problems ?
Currently it only works as follows:
```
inputs = {"input_ids": None, "pixel_values": pixel_values}
outputs = model(inputs)
```
<|||||>Hey! TF maintainer here - we're definitely aware of the issues with `input_processing`, but we're still working on the ways to fix it without breaking other things! If your model works when passing a null `input_ids`, it's fine to use that for now - you could possibly insert a shim into your `call()` method to avoid the user having to do it themselves?<|||||>I see a TF version of Wav2Vec2 has just been added, and they overwrote the `input_processing` function with a custom `input_values_processing` function as seen [here](https://github.com/huggingface/transformers/blob/040283170cd559b59b8eb37fe9fe8e99ff7edcbc/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L61). So I might do the same for ViT. |
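(For illustration, the `call()` shim idea boils down to something like the tiny wrapper below - hypothetical code, not an actual API:)
```python
def call_vit(model, pixel_values, **kwargs):
    """Fill in the null input_ids internally so callers can pass pixel_values alone (illustrative only)."""
    inputs = {"input_ids": None, "pixel_values": pixel_values}
    return model(inputs, **kwargs)

# usage sketch: outputs = call_vit(tf_vit_model, pixel_values)
```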
transformers | 12,102 | closed | Appending label2id and id2label to models for inference | 06-10-2021 14:13:12 | 06-10-2021 14:13:12 | ||
transformers | 12,101 | closed | GPT2 medium config n_ctx is wrong I guess? | Hi Guys,
For gpt2-medium ```n_ctx: 4096``` right ?
But in config it is showing ```n_ctx: 1024``` . | 06-10-2021 09:46:33 | 06-10-2021 09:46:33 | No, 1024 is correct. It refers to the sequence length of the model. <|||||>Oh sorry.
Then what's the config parameter of ```intermediate projection after attention```, which is ```4096``` in gpt2-medium.<|||||>Looking at the [config attributes of GPT-2](https://huggingface.co/transformers/model_doc/gpt2.html#transformers.GPT2Config), there's an attribute called `n_inner` which is defined as "Dimensionality of the inner feed-forward layers. None will set it to 4 times `n_embd`".
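This is easy to verify from the config itself, e.g.:
```python
from transformers import GPT2Config

cfg = GPT2Config.from_pretrained("gpt2-medium")
print(cfg.n_ctx)    # 1024 - maximum sequence length
print(cfg.n_embd)   # 1024 - hidden size
print(cfg.n_inner)  # None - inner feed-forward size defaults to 4 * n_embd = 4096
```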
Apparently, the `n_embd` attribute of the medium-sized GPT-2 model is 1024. So this times 4 equals 4096. <|||||>Oh my bad. Thanks. There are too many inconsistencies between different model configs. |
transformers | 12,100 | closed | 'Speech2TextProcessor' has no attribute 'from_pretrained'` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: ubuntu
- Python version: 3.7.10
- PyTorch version (GPU?): cpu 1.8.1
- Tensorflow version (GPU?): no tensorflow
- Using GPU in script?: no cpu
Models:
Speech2TextProcessor
## To reproduce
Steps to reproduce the behavior:
1. processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")
The error I got :
`AttributeError: type object 'Speech2TextProcessor' has no attribute 'from_pretrained'`
<!-- A clear and concise description of what you would expect to happen. -->
I don't understand why I got this issue, because "from_pretrained" exists in processing_speech_to_text.py, line 78. | 06-10-2021 09:34:12 | 06-10-2021 09:34:12 | Hello! I'm fixing this in #12145 to return a better error.
transformers | 12,099 | closed | FillMaskPipeline very slow when provided with a large `targets` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @Narsil
## Information
The model I am using: `ethanyt/guwenbert-base`, with a `RoBERTa` model and a `BertTokenizerFast` tokenizer.
## To reproduce
Steps to reproduce the behavior:
1. Initialize a `fill-mask` pipeline with the model and the tokenizer mentioned above
2. Call it with any sentence and a large `targets` (with a length of ~10k single words)
## Problem
The call would be much slower than a similar call without a `targets` argument. A call without a `targets` argument costs ~0.1s, while a call with a `targets` argument costs ~0.3s.
The following code is present in `src/transformers/pipelines/fill_mask.py`:
```python
class FillMaskPipeline(Pipeline):
# ...
def __call__(self, *args, targets=None, top_k: Optional[int] = None, **kwargs):
# ...
if targets is not None:
# ...
targets_proc = []
for target in targets:
target_enc = self.tokenizer.tokenize(target)
# ...
targets_proc.append(target_enc[0])
```
This function iterates through targets, rather than sending them directly to `tokenize`, which does not utilize the batch processing optimization of `TokenizerFast`s, hence the slow speed. | 06-10-2021 09:16:27 | 06-10-2021 09:16:27 | Do you have an example to reproduce the issue? Benchmarking is sometimes surprising and hardware dependent.
I can imagine that this is indeed a slowdown as Python - Rust communication is not free.
However the omitted part of your comment is error detection, which is important and we need to keep it.
```python
if len(targets) == 0 or len(targets[0]) == 0:
raise ValueError("At least one target must be provided when passed.")
if isinstance(targets, str):
targets = [targets]
targets_proc = []
for target in targets:
target_enc = self.tokenizer.tokenize(target)
if len(target_enc) > 1 or target_enc[0] == self.tokenizer.unk_token:
logger.warning(
f"The specified target token `{target}` does not exist in the model vocabulary. "
f"Replacing with `{target_enc[0]}`."
)
targets_proc.append(target_enc[0])
target_inds = np.array(self.tokenizer.convert_tokens_to_ids(targets_proc))
```
I think we can get away with encoding every target at once, then iterating through the whole array to do the error detection.
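Something along these lines (just a rough sketch, not the final patch):
```python
# Batch-encode all targets in one call, then do the error/warning pass in Python.
encodings = self.tokenizer(targets, add_special_tokens=False)["input_ids"]
targets_proc = []
for target, ids in zip(targets, encodings):
    if len(ids) > 1 or ids[0] == self.tokenizer.unk_token_id:
        replacement = self.tokenizer.convert_ids_to_tokens(ids[0])
        logger.warning(
            f"The specified target token `{target}` does not exist in the model vocabulary. "
            f"Replacing with `{replacement}`."
        )
    targets_proc.append(ids[0])
target_inds = np.array(targets_proc)
```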
However, as this is a performance problem, I think we realistically need to test that improving performance on 10K targets does not reduce performance significantly on 10 targets (which is a more common usage).
Caveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.
We don't have benchmarking tests right now, but if a PR goes in I think a test should demonstrate the usage and have a clear comment at leat about this specific decision.<|||||>An example code can be found [in this colab example](https://colab.research.google.com/gist/EtaoinWu/0cf5b37882bd18bcc554d3da717a3974/fillmaskpipeline-test.ipynb). On the default Google machine that I wrote this notebook on, the version with a `targets` argument slows down significantly (100ish ms to 600ish ms).
> Caveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.
The current behavior of `FillMaskPipeline` is that when a multi-token string is passed, only the first token is used. I doubt anyone would actually need this, because if someone wants to choose a token from a subset of the vocabulary to fill into a mask, they usually know the subset exactly. Deliberately passing multi-token strings into `FillMaskPipeline` (and expecting it to tokenize them and drop all-but-first tokens) does not make much sense.
### Another discovery
When coding my example, I just discovered the bottleneck of the performance problem. When provided with a `targets` argument, `FillMaskPipeline` ignores its `top_k` parameter, which means that it has to output a whole list proportional to `len(targets)`, and that's the bottleneck (at least in my test). The code example above actually respects `top_k` parameter when a `targets` is present, hence much faster when constructing the return value. After the optimization, the code costs 200ish milliseconds.<|||||>> An example code can be found [in this colab example](https://colab.research.google.com/gist/EtaoinWu/0cf5b37882bd18bcc554d3da717a3974/fillmaskpipeline-test.ipynb). On the default Google machine that I wrote this notebook on, the version with a `targets` argument slows down significantly (100ish ms to 600ish ms).
Thanks, this will help !
>
> > Caveat: when a target is going to be very long (like 20 tokens) with 10k targets, the resulting array will be 20 x 10k for ids, that can pile up quite fast memory usage. In that context, it could be much slower to pass everything at once. We need to benchmark that too.
>
> The current behavior of `FillMaskPipeline` is that when a multi-token string is passed, only the first token is used. I doubt anyone would actually need this, because if someone want to choose a token from a subset of the vocabulary to fill into a mask, they usually know the subset exactly. Deliberately passing multi-token strings into `FillMaskPipeline` (and expecting it to tokenize them and drop all-but-first tokens) does not make much sense.
As a maintainer of a live product, I can tell you not everyone is aware of what happens behind a pipeline (and it is exactly why they exist, so we can abstract away all nitty gritty details of transformers). So it will happen that some users will try out those examples and be surprised at slowness.
It's something that `pipelines` should try to address if possible.
> ### Another discovery
>
> When coding my example, I just discovered the bottleneck of the performance problem. When provided with a `targets` argument, `FillMaskPipeline` ignores its `top_k` parameter, which means that it has to output a whole list proportional to `len(targets)`, and that's the bottleneck (at least in my test). The code example above actually respects `top_k` parameter when a `targets` is present, hence much faster when constructing the return value. After the optimization, the code costs 200ish milliseconds.
Ok, I think I remember the `targets` being added and the decision was that if `top_k` > `len(targets)` we were not obliged of honoring `top_k` because it wouldn't make any sense. `top_k` < `len(targets)` should be honored though.
<|||||>I was able to reproduce and optimize away most of the performance, now any example should run at roughly the same speed.
Slowdown will happen when you miss the vocabulary, but the warnings should help users figure it out.<|||||>Thanks a lot. As a background, I found the issue when reproducing the following paper:
> Deng, Liming, et al. "An Iterative Polishing Framework Based on Quality Aware Masked Language Model for Chinese Poetry Generation." _Proceedings of the AAAI Conference on Artificial Intelligence_. Vol. 34. No. 05. 2020.
which involves calling `FillMaskPipeline` iteratively 10 times at most for each API call, which depending on the input, may or may not have the `targets` parameter. The time difference in the two types of API calls made me find this issue. |
transformers | 12,098 | closed | 🌟 New model addition - GPT-J-6B | # 🌟 New model addition - GPT-J-6B
## Model description
The GPT-J-6B model (GPT-NEO model in Jax with 6B parameters trained on the Pile)
Repo: https://github.com/kingoflolz/mesh-transformer-jax
Weights:
[Slim weights (bf16 weights only, for inference, 9GB)](https://the-eye.eu/public/AI/GPT-J-6B/step_383500_slim.tar.zstd)
[Full weights (including optimizer params, 61GB)](https://the-eye.eu/public/AI/GPT-J-6B/step_383500.tar.zstd)
## Open source status
* [x] the model implementation is available: (give details)
* [x] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 06-10-2021 07:26:01 | 06-10-2021 07:26:01 | Hello @patrickvonplaten! would you be able to give an estimation for the timeline of the implementation of the this model in huggingface? <|||||>I have a PR adding support for this model here: #12098<|||||>> I have a PR adding support for this model here: #12098
You probably wanted to link this PR: https://github.com/huggingface/transformers/pull/12106<|||||>Yeah, copied the wrong thing somehow.<|||||>@finetuneanon great! Do you know when it will be ready to use from the transformer library? Thnx for the work. <|||||>Depends on when it will be merged. Until then you can install my branch like this:
```
pip install git+https://github.com/finetuneanon/transformers@gpt-j
```
Convert the weights with the conversion script linked from the PR.<|||||>@finetuneanon I did pip install the transformers@gpt-j and I managed to convert the weights through the script you referenced but only thing I'm now struggling with is making the config file. I uploaded the gpt-j-6b.json file to colab but I don't how to make the config variable via AutoConfig class(don't know if that is how it is made). So If you could let me know how to make the config file, I would appreciate it a lot.
this [colab](https://colab.research.google.com/drive/1xl5tRYTiVISn6FMfhgyB70LZ-Biwj43E#scrollTo=QI_zE5QA8ycF) file containts all the code. <|||||>Rename it into config.json, put it into a folder and you should be able to `AutoConfig.from_pretrained("whatever-folder")` |
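In other words, something like this (the folder name is just an example; the exact model class depends on the branch you installed):
```python
from transformers import AutoConfig, AutoModelForCausalLM

# ./gpt-j-6b/ contains the converted checkpoint plus the file renamed to config.json
config = AutoConfig.from_pretrained("./gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained("./gpt-j-6b", config=config)
```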
transformers | 12,097 | closed | Add from_pretrained to dummy timm objects | Closes https://github.com/huggingface/transformers/issues/12091
cc @NielsRogge
@sgugger Am I missing something relative to specifying that these dummy items should be generated with the `from_pretrained` method? | 06-10-2021 07:22:38 | 06-10-2021 07:22:38 | Failure of the templates came from the failure of `make quality` on master, fixed in a commit, so this is good to merge!<|||||>Thanks a lot @sgugger! |
transformers | 12,096 | closed | DetrFeatureExtractor post_process not rescaling bboxes as expected | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:master
- Platform:Google Colab
- Python version:3.7
- PyTorch version (GPU?):1.8.1
- Tensorflow version (GPU?):N/A
- Using GPU in script?:N/A
- Using distributed or parallel set-up in script?:N/A
### Who can help
@NielsRogge
## Information
Model I am using (Bert, XLNet ...): `DetrForObjectDetection`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Colab Below
## To reproduce
Steps to reproduce the behavior:
<a href="https://colab.research.google.com/gist/nateraw/b844f1f5118abd05c09a077fdec75dd3/detr-resize-issue.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect for `feature_extractor.post_process` to rescale the bounding boxes so they match the input images. Right now they seem to be scaled differently.
For example - the following should create `processed_outputs` that contain bbox values that are ready to be plotted along with the original image.
```python
import PIL
import torch
from typing import List

from transformers import DetrFeatureExtractor, DetrForObjectDetection

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')

images: List[PIL.Image.Image] = ...  # Some list of PIL images
inputs = feature_extractor(images, return_tensors='pt')
outputs = model(**inputs)
img_sizes = torch.tensor([im.size for im in images])
processed_outputs = feature_extractor.post_process(outputs, img_sizes)
```
:thought_balloon: - One thought I had was that I'm not sure if I'm preparing the `img_sizes` tensor correctly above. | 06-10-2021 06:11:22 | 06-10-2021 06:11:22 | False alarm...img sizes need to be flipped.
```
# ...
img_sizes = torch.tensor([tuple(reversed(im.size)) for im in images])
# ...
``` |
transformers | 12,095 | closed | Continuous training on Fine-tuned Model | # π Feature request
How can I continue training on a Fine-tuned Model?
I have a fine-tuned model from OpenSLR data, and I want to continue training it as I gain more transcribed audio data over time. Can I do this by treating the fine-tuned model as a checkpoint?
## Motivation
I am aiming to make a model for Nepali Language. I have a way to collect data over time and it is continuous. So, I want to find a way I can train the model continuously as I gain data over time
| 06-10-2021 04:59:40 | 06-10-2021 04:59:40 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Sure, thank you. |
transformers | 12,094 | closed | Create a torchscript version of Tokenizer in Bert | Hi,
Not sure if a feature request is a proper flag for the below request:
I want to create an executable version of Tokenizer for Bert - Below is a small code piece:
```
from transformers import AutoTokenizer, AutoModel
import torch
sentences = ['This framework generates tokens for each input sentence']
tokenizer_model = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2", torchscript=True)
encoded_input = tokenizer_model(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')
# !!! complains that 'tokenizer_model' doesn't have eval()
tokenizer_model.eval();
# !!! tokenizer_model takes a list of sentences as inputs, how should I provide tensorial dummy inputs?
traced_tokenizer_model = torch.jit.trace(tokenizer_model, dummy_inputs)
torch.jit.save(traced_tokenizer_model, "traced_tokenize_bert.pt")
```
My first problem is that `tokenizer_model` doesn't have `eval()` - so how can I follow the guideline for creating the traced models?
My second problem is that the `tokenizer_model` takes as inputs a list of strings. How am I supposed to provide tensorial form dummy inputs to create the `traced_tokenizer_model`?
I have followed the instructions on your page for creating torchscripts but do not know how I can create one for the Tokenizer module above.
| 06-10-2021 01:50:54 | 06-10-2021 01:50:54 | Hello! I think you're mistaking tokenizers for models. The two are part of the NLP pipeline, but they're very different. The tokenizer prepares the inputs for the model - but it isn't a PyTorch module. It's either plain Python code, or a rust object with a Python wrapper (like it is the case here).
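For completeness, the usual workaround is to keep tokenization in plain Python and trace only the model, roughly like this (a sketch, using the same checkpoint as above):
```python
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
model = AutoModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2", torchscript=True)
model.eval()

# tokenization stays outside the traced graph; its outputs become the dummy inputs
enc = tokenizer(["This framework generates tokens for each input sentence"],
                padding=True, truncation=True, max_length=128, return_tensors="pt")
traced = torch.jit.trace(model, (enc["input_ids"], enc["attention_mask"]))
torch.jit.save(traced, "traced_mpnet.pt")
```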
Since it's not a torch module - it doesn't make sense for it to have the `eval` method. Same for your second question, a tokenizer cannot be traced as it's not a torch module.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,093 | closed | Speedup batch matmul in pytorch | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Hi there,
I'm trying to combine Sparse Transformer into Vision Transformer to speed up the time-consuming training also inference.
However, the speed is really slow with my code which is shown below. Can someone help me point out where I was wrong?
Thanks.
```python
import torch
import math

query_layer = torch.randn(32, 12, 197, 64)
key_layer = torch.randn(32, 12, 197, 64)
key_layer_transpose = key_layer.transpose(-1, -2)
dim0 = query_layer.shape[0]
dim1 = query_layer.shape[1]
dim2 = query_layer.shape[2]
dim3 = query_layer.shape[3]
print(dim0, dim1, dim2, dim3)

# original transformer attention score calculation
attention_scores = torch.matmul(query_layer, key_layer_transpose)
print(attention_scores)

# my modification based on Sparse Transformer for speeding up training
# (but actually slower by at least 20x compared to the original transformer)
N = math.sqrt(dim3)
attention_scores = torch.zeros(dim0, dim1, dim2, dim2, device='cuda:0')
for i_dim0 in range(dim0):
    for i_dim1 in range(dim1):
        for i in range(dim2):
            for j in range(dim2):
                if (i == j) or ((i - j) % N == 0 and i - j > 0):
                    attention_scores[i_dim0, i_dim1, i, j] = torch.matmul(query_layer[i_dim0, i_dim1, i, :], key_layer_transpose[i_dim0, i_dim1, :, j])
print(attention_scores.shape)
print(attention_scores)
```
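For reference, the same sparse pattern can be computed without the Python loops - a rough sketch that reuses the tensors defined above:
```python
# Build the boolean sparsity mask once, then mask the dense score matrix.
N = int(math.sqrt(dim3))
i = torch.arange(dim2).unsqueeze(1)  # (dim2, 1)
j = torch.arange(dim2).unsqueeze(0)  # (1, dim2)
mask = (i == j) | (((i - j) % N == 0) & (i - j > 0))  # (dim2, dim2) boolean

attention_scores = torch.matmul(query_layer, key_layer_transpose) * mask
```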
| 06-10-2021 01:35:41 | 06-10-2021 01:35:41 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>I've found the solution by using deepspeed.
However, I encountered the error and I opened an issue here. Please help me
with this issue. Thanks.
https://github.com/microsoft/DeepSpeed/issues/1153
On Thu, Jun 10, 2021 at 2:04 PM Lysandre Debut ***@***.***>
wrote:
> Hello, thanks for opening an issue! We try to keep the github issues for
> bugs/feature requests.
> Could you ask your question on the forum <https://discuss.huggingface.co>
> instead?
>
> Thanks!
>
> β
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12093#issuecomment-858369696>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AJY7KIWLCP5AOY7CA3JI6ZDTSBPY3ANCNFSM46ND2SZQ>
> .
>
|
transformers | 12,092 | closed | Replicating PEGASUS results on a benchmark dataset | I'm trying to replicate the PEGASUS results on Reddit-TIFU dataset, but the scores I'm getting are a bit far from what has been reported in the main paper. I'm using the same test set as the one authors used in the main paper (80-10-10 splits based on `TensorflowDataset` according to their code-base). Would anyone have had similar experience of working with PEGASUS on either of reported datasets? Although I'm looking to replicate Reddit-TIFU results, but that would be also good to see if anyone could get the results replicated on either of the experimental datasets.
It has to be mentioned that I'm using the finetuned checkpoint on the Reddit-TIFU dataset: `google/pegasus-reddit_tifu` without further fine-tuning (actually I don't need that) using the following script; `pegasus.sh`
```
CUDA_VISIBLE_DEVICES=0,1,2,3,4 python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path google/pegasus-reddit_tifu \
--do_predict \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/val.json \
--test_file $DS_BASE_DIR/test.json \
--output_dir /home/code-base/user_space/saved_models/pegasus/ \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 \
--overwrite_output_dir \
--predict_with_generate \
--text_column text \
--summary_column summary
```
The scores I'm achieving: `do_predict` output:
> ***** predict metrics *****
> predict_gen_len = 40.294
> predict_loss = 3.9969
> **predict_rouge1 = 27.13
> predict_rouge2 = 8.38
> predict_rougeL = 20.68**
> predict_samples = 4214
However, the (best) reported scores are:
> **predict_rouge1 = 26.63
> predict_rouge2 = 9.01
> predict_rougeL = 21.60**
Even, assuming that `google/pegasus-reddit_tifu`'s pretraining is [improved](https://huggingface.co/google/pegasus-reddit_tifu) (Mixed & Stochastic), I can't reproduce the reported results on Reddit-TIFU, which are: R-1: 27.99/ R-2: 9.81/ R-L: 22.94
## Environment info
- `transformers` version: 4.7.0 dev
- Platform: Linux Ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?): --
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, I'm using four GPUs for prediction.
### Who can help
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @sshleifer, @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): PEGASUS
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] an official GLUE/SQUaD task: (give the name): Reddit-TIFU
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `bash pegasus.sh` _bash script is posted above_
## Expected behavior
I expect to be able to reproduce the official results reported in the main PEGASUS paper on Reddit-TIFU dataset; however, I'm getting higher Rouge-1 score, while lower Rouge-2 and Rouge-L scores. | 06-09-2021 23:07:31 | 06-09-2021 23:07:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,091 | closed | Provide more useful error message in Detr from_pretrained when timm not installed | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?):1.8.1+cu101
- Tensorflow version (GPU?):N/A
- Using GPU in script?:N/A
- Using distributed or parallel set-up in script?:N/A
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
[Here's a colab notebook.
](https://colab.research.google.com/drive/1b8RVCARcZU8kBFywRID8ZLPAYwhcd10p?usp=sharing)
1. install latest transformers from master w/o installing `timm`.
2. Try to init any `DetrModel` `from_pretrained`, and you'll see you get a misleading error
```python
from transformers import DetrForObjectDetection
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
```
Error thrown:
```bash
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-175d7bae5f8e> in <module>()
1 from transformers import DetrForObjectDetection
2
----> 3 model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50')
AttributeError: type object 'DetrForObjectDetection' has no attribute 'from_pretrained'
```
3. Try the same w/ timm installed, and see that it works.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Have message informing user that they must have `timm` installed to use `from_pretrained` with `Detr` models instead of just telling them there is no attribute `from_pretrained`. | 06-09-2021 22:15:38 | 06-09-2021 22:15:38 | Thanks for raising an issue! Fixing this in #12097 <|||||>Thanks for already testing out the model! Highly appreciate your feedback. If anything related to the model/docs can be improved, feel free to reach out.
|
transformers | 12,090 | closed | Checkpoint detected info log in run_clm.py | I think the `# Setup logging` block should go above the `# Detecting last checkpoint` block, so it can show the warning `Checkpoint detected, resuming training at...` in the PyTorch language-modeling example `run_clm.py`.
@sgugger | 06-09-2021 19:20:32 | 06-09-2021 19:20:32 | Thanks for flagging this! It should be fixed by the PR mentioned above. |
transformers | 12,089 | closed | [Wav2Vec2ForPretraining] Correct checkpoints wav2vec2 & fix tests | # What does this PR do?
Correct weights have been uploaded so change the tests accordingly. Also some minor fixes are added.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-09-2021 19:17:51 | 06-09-2021 19:17:51 | |
transformers | 12,088 | closed | [versions] rm require_version_examples | As explained in https://github.com/huggingface/transformers/issues/12086 `require_version_examples` wrapper is no longer useful since examples' requirements are now scattered across multiple files, so removing it as it can't be used because of that.
Fixes: https://github.com/huggingface/transformers/issues/12086
@sgugger | 06-09-2021 17:27:57 | 06-09-2021 17:27:57 | |
transformers | 12,087 | closed | [examples/flax] pass decay_mask fn to optimizer | Fixes Typo | 06-09-2021 17:19:10 | 06-09-2021 17:19:10 | |
transformers | 12,086 | closed | examples requirements isn't in sync with `require_version_examples` | We have `require_version_examples`
https://github.com/huggingface/transformers/blob/b1a8aa94f0a2ccea7c68b79066141aa822b96e42/src/transformers/utils/versions.py#L123-L126
but it looks like it wasn't updated in the last reshuffle and requirements got split and it suggests incorrect solution.
I tried to offer to use it here: https://github.com/huggingface/transformers/pull/11927
but since requirements are now scattered over multiple files we probably should remove it and its usage in legacy scripts, since it gives wrong info where it's currently still used.
It's just one usage in several legacy scripts:
```
require_version_examples("pytorch_lightning>=1.0.4")
```
so we can just replace it with:
```
require_version("pytorch_lightning>=1.0.4")
```
which would do the trick.
@sgugger | 06-09-2021 17:01:29 | 06-09-2021 17:01:29 | Yes, that works. |
transformers | 12,085 | closed | PyTorch MLM - Dummy Script | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-09-2021 13:47:31 | 06-09-2021 13:47:31 | |
transformers | 12,084 | closed | Memory Efficient FP 16 Training | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Fairseq uses memory efficient FP 16 training as explained in https://arxiv.org/pdf/1904.01038.pdf.
## Motivation
Generally the model requires high-end GPUs to fine-tune on longer datasets. Using memory-efficient FP16 we can reduce the need for high-end GPUs, and thus models can be fine-tuned without OOM problems.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
| 06-09-2021 13:35:45 | 06-09-2021 13:35:45 | Is "memory efficient" fp16 different to the fp16 available in the training scripts?<|||||>Yes, both are different.
https://github.com/pytorch/fairseq/issues/2907<|||||>Thanks for the link! cc @sgugger @stas00 <|||||>I think you must be referring to this comment https://github.com/pytorch/fairseq/issues/2907#issuecomment-729722676
> --memory-efficient-fp16 gets rid of the FP32 model copy and only maintains FP32 momentum in the optimizer. Thus you'll see 0.5x memory usage from the model weights, 0.5x memory usage in the forward/backward, and 1.0x memory usage in the optimizer (relative to FP32).
correct?
I think this is the implementation: https://github.com/pytorch/fairseq/blob/f8a7c93440cd925f70979a6082c18f830b39e44b/fairseq/optim/fp16_optimizer.py#L456
Appears to be added 2 years ago. And you quoted an old paper from 2019. Do you think it's actually something that's worth investigating? Somehow I'd expect for it to adopted by other projects if it were to work great, so it'd be good to ask someone experienced with it whether it's actually good.
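To put rough numbers on that description (my own back-of-the-envelope accounting for Adam, not something from the fairseq docs):
```python
billions = 1.0  # parameters, in billions -> results are roughly GB

fp32_training    = billions * (4 + 4 + 8)      # fp32 weights + grads + Adam m/v
mixed_precision  = billions * (2 + 2 + 4 + 8)  # fp16 weights/grads + fp32 master copy + Adam m/v
memory_efficient = billions * (2 + 2 + 8)      # fp16 weights/grads + fp32 Adam m/v only
print(fp32_training, mixed_precision, memory_efficient)  # 16.0 16.0 12.0
```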
<|||||>Not sure how to evaluate the effectiveness of this proposal; it would be helpful to have some case studies that show the actual improvements.
I asked around and someone reported that someone mentioned this was useful for certain models, but it'd help to know which and how they were trained so that there is an actual proven setup to work with.
I found pytext included a variation of it here: https://github.com/facebookresearch/pytext/commit/c6d13acbafc856fdc0291bf6608d6f318b6690d2, but I can't find any other references via google, which is not very encouraging. But we don't know whether other implementations use the same name.
But also let's ask this, @rajgar114, would you like to try to work on this and submit a PR when you have something working?<|||||>`I think you must be referring to this comment pytorch/fairseq#2907 (comment) ?`
Yes, @stas00 I was referring to this comment [pytorch/fairseq#2907](https://github.com/pytorch/fairseq/issues/2907#issuecomment-729722676) only.
`Appears to be added 2 years ago.`
No doubt the paper is quite old but I have found some instances that memory efficient fp16 training worked for people having low end GPU's. Here is an example:
https://bleepcoder.com/fairseq/551167214/oom-while-trying-to-train-bart
`Do you think it's actually something that's worth investigating?`
I am not completely sure how much impact it can have on reducing GPU memory consumption. We should definitely ask some experienced people and also try to compare and analyze the results with & without the memory-efficient fp16 version.
`Would you like to try to work on this and submit a PR when you have something working?`
@stas00 Thanks for giving this wonderful opportunity. I would love to work on open source projects. But I can't because of my busy schedule, I would not be able to spend time on this project. Sorry for that. <|||||>Thank you for the feedback, @rajgar114 and letting us know that you'd love to work on that but reality won't allow that at the moment.
I have added it to https://github.com/huggingface/transformers/issues/12126 so it won't get lost and hopefully it'd find a champion.
From your part what would help is to find specific models/setups where it has been found to be converging well despite the limitations it imposed on itself. So that whoever works on this will have a good chance of succeeding. Thank you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I do not know exactly if my case fits this issue, but I have trained seq2seq model with fairseq (v0.10.1) and current transformer 4.9. In both cases I have used encoder-decoder and BART architecture and saw that Fariseq allows me to use a greater batch size 120- 160 (sentences) around 5000 tokens. Transformers library with deepspeed integration handles max 48(sentences).
I have used 3x Geforce 3090 with 24 GB ram. In both cases model size ~70M parameters (6 layers for encoder and decoder, hidden_size=512, ff=2048)
Concludes, fairseq training is almost 3 times faster (measured by the number of tokens seen during training in a fixed time budget).
<|||||>Have you tried using activation checkpointing? That should save a lot of memory and enable much larger batch sizes.
Also it might be good to try both Deepspeed zero-2 and zero-3 stages - I don't know which one you were optimizing with. |
transformers | 12,083 | closed | Add text_column_name and label_column_name to run_ner and run_ner_no_trainer args | # What does this PR do?
It would be nice to be able to specify which columns hold the `text` and `label` data for run_ner via its arguments.
This is especially useful when training on a non-CSV (i.e. JSON) dataset, because the `text` and `label` columns are otherwise determined by column order if the default columns are missing.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
```
% python -m pytest -n auto --dist=loadfile -s -v ./examples/
...
Results (1653.79s):
18 passed
3 skipped
```
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 06-09-2021 12:51:47 | 06-09-2021 12:51:47 | @sgugger thank you for your suggestions!
I've pushed a commit based on them.<|||||>I appreciate your fix too! |
transformers | 12,082 | closed | Add support for XLM-R XL and XXL models | Hi,
this PR adds support for the recently released XL and XXL models for XLM-R. These models are described in the ["Larger-Scale Transformers for Multilingual Masked Language Modeling"](https://arxiv.org/abs/2105.00572) paper.
It turns out that these new models are trained with a more recent version of `fairseq` than the "old" XLM-R Base and Large models. Only the current `master` version of `fairseq` is able to load these new models correctly. Unfortunately, some model changes were made (see [this](https://github.com/pytorch/fairseq/commit/54423d3b22a3e7f536e02e9e5445cef9becbd60d) refactoring commit), and the following changes also need to be made in the Transformers library:
The XLM-R Base and Large model used layer normalization in the embeddings, whereas the newer XL and XXL models do not make use of normalized embeddings: layer normalization is done at the end of the transformer. See discussion here: https://github.com/pytorch/fairseq/issues/3600
@patrickvonplaten proposed to introduce a new `RobertaConfig` variable - like `normalize_embeddings` - in order to reflect these model changes in `modeling_roberta.py` directly, instead of writing a new model class (which copies 99% of existing code).
----
Changes made so far:
* [x] Update conversion script to work with lastest `fairseq` master version (*1.0.0a*)
Necessary changes:
* [ ] Introduce new config variable in `RobertaConfig` to indicate different layer normalization "strategies"
* [ ] Implement these different layer normalization settings in all modeling classes
* [ ] Re-run conversion script and upload converted XLM-R XL and XXL models to hub | 06-09-2021 10:43:20 | 06-09-2021 10:43:20 | @Soonhwan-Kwon conversion script is currently not working yet for the newer models, but it is working for XLM-R Base for example. Layer norm changes need to be done first in RoBERTa modeling code, so that conversion script is writing a correct model :)<|||||>Hi @stefan-it thanks for contributing the new models and do you have any plan to push the code and models into https://huggingface.co/models recently @patrickvonplaten ? <|||||>Waiting for this model. Is there any expected timeline? @patrickvonplaten <|||||>Should we try to look into it again @stefan-it ? :-) |
transformers | 12,081 | closed | GPT2 Flax "TypeError: JAX only supports number and bool dtypes, got dtype object in array" | On GPU
```
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2-medium")
>>> input_context = "The dog"
>>> # encode input context
>>> input_ids = tokenizer(input_context, return_tensors="jax").input_ids
>>> # generate candidates using sampling
>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
TypeError: JAX only supports number and bool dtypes, got dtype object in array
```
@patrickvonplaten @patil-suraj | 06-09-2021 09:48:11 | 06-09-2021 09:48:11 | I see where this is coming from. For now, could you try initializing model and tokenizer like this
```
from transformers import GPT2TokenizerFast, FlaxGPT2LMHeadModel
model_id = "gpt2-medium"  # the checkpoint used in the issue above
tokenizer = GPT2TokenizerFast.from_pretrained(model_id, padding_side="left", pad_token="<|endoftext|>")
model = FlaxGPT2LMHeadModel.from_pretrained(model_id, pad_token_id=50256)
```
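For completeness, a short usage sketch with that setup (this assumes the Flax `generate` output exposes a `sequences` field, as it does in recent versions):
```python
input_ids = tokenizer("The dog", return_tensors="jax").input_ids
outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
```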
We'll soon publish a detailed colab about Flax generate <|||||>Done buddy. Worked. Thanks a lot.
Does Flax `model.generate` make use of caching `Query` and `Value` in the attention layers?
I have to run a benchmark for generation too. It's fair to compare only if caching is supported. <|||||>yeah it only works with caching <|||||>Thanks. Closing the issue. |
transformers | 12,080 | closed | Fix missing id2label and label2id in run_ner.py | This is to retain the NER labels when training, so they can be used to map label IDs back to label names during later prediction.
This functionality is present in the old version [https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py#L170](https://github.com/huggingface/transformers/blob/master/examples/legacy/token-classification/run_ner.py#L170), but missing in the current one.
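For reference, a minimal sketch of what retaining the labels amounts to, mirroring the legacy script linked above (the label set and model name here are only illustrative):
```python
from transformers import AutoConfig

label_list = ["O", "B-PER", "I-PER"]  # example label set
config = AutoConfig.from_pretrained(
    "bert-base-cased",
    num_labels=len(label_list),
    id2label={i: label for i, label in enumerate(label_list)},
    label2id={label: i for i, label in enumerate(label_list)},
)
# saving this config alongside the model keeps the mappings for later prediction
```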
@sgugger | 06-09-2021 09:28:13 | 06-09-2021 09:28:13 | This has just been done in #12001 :-)
Thanks for the PR! |
transformers | 12,079 | closed | Use Distilbert to run language model, encounter error "Unrecognized configuration class " |
- `transformers` version: 4.6.1
- Platform: centos 7.5
- Python version: 3.7
- PyTorch version (GPU?): 1.10
- Using GPU in script?: v100
- Using distributed or parallel set-up in script?: no
## To reproduce
```
python3 run_clm.py \
--model_name_or_path distilbert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir ./tmp/test-clm
```
```
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,729 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-09 17:11:05,730 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /home/work/liujiaxiang/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
Traceback (most recent call last):
File "run_clm.py", line 536, in <module>
main()
File "run_clm.py", line 322, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/ssd2/liujiaxiang/workfiles/transformer_invariant_bigger/transformers/src/transformers/models/auto/auto_factory.py", line 397, in from_pretrained
f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
ValueError: Unrecognized configuration class <class 'transformers.models.distilbert.configuration_distilbert.DistilBertConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of RoFormerConfig, BigBirdPegasusConfig, GPTNeoConfig, BigBirdConfig, CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig, BartConfig, MBartConfig, PegasusConfig, MarianConfig, BlenderbotConfig, BlenderbotSmallConfig, MegatronBertConfig.
``` | 06-09-2021 09:22:31 | 06-09-2021 09:22:31 | As printed by the error, DistilBERT is not supported by `AutoModelForCausalLM`, since it's an encoder-only model. Please use one of the supported models to perform autoregressive (i.e. left-to-right) language modeling.<|||||>Why the [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) have gave a example of distilbert ? <|||||>The example you refer to is `run_mlm.py` (mlm is short for masked language modeling). However, the script you're using above is `run_clm.py` (clm is short for causal language modeling, also called autoregressive language modeling). DistilBERT only supports mlm, not clm. <|||||>Ah! yes, you are right!
My fault !
Thanks for your answer. |
transformers | 12,078 | closed | OSError: Unable to open file (file signature not found) | python version: 3.7.6
transformers: 4.6.1
tensorflow-cpu: 2.3.1
my code:
```python
from transformers import TFAutoModel
model = TFAutoModel.from_pretrained("./chinese-bert-wwm-ext")
```
and `chinese-bert-wwm-ext` is a model directory downloaded from https://huggingface.co/models.
After I run this code in my jupyter notebook, I get an OSError:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1291 try:
-> 1292 missing_keys, unexpected_keys = load_tf_weights(model, resolved_archive_file, load_weight_prefix)
1293 except OSError:
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in load_tf_weights(model, resolved_archive_file, _prefix)
470 # Read the H5 file
--> 471 with h5py.File(resolved_archive_file, "r") as f:
472 # Retrieve the name of each layer from the H5 file
~\Anaconda3\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, rdcc_nslots, rdcc_nbytes, rdcc_w0, track_order, **kwds)
407 fapl, fcpl=make_fcpl(track_order=track_order),
--> 408 swmr=swmr)
409
~\Anaconda3\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (file signature not found)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-8-724814da42c1> in <module>
----> 1 model = TFAutoModel.from_pretrained('./chinese-bert-wwm-ext/')
~\Anaconda3\lib\site-packages\transformers\models\auto\auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
379 if type(config) in cls._model_mapping.keys():
380 model_class = _get_model_class(config, cls._model_mapping)
--> 381 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
382 raise ValueError(
383 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1293 except OSError:
1294 raise OSError(
-> 1295 "Unable to load weights from h5 file. "
1296 "If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True. "
1297 )
OSError: Unable to load weights from h5 file. If you tried to load a TF 2.0 model from a PyTorch checkpoint, please set from_pt=True.
```
| 06-09-2021 02:54:58 | 06-09-2021 02:54:58 | Hi @Holy-Shine
Try:
```python
from transformers import TFAutoModel
model = TFAutoModel.from_pretrained("hfl/chinese-bert-wwm-ext")
```<|||||>
@vishal-burman
thanks! It works for me.
And I found that the tf_model.h5 file in my local dir is definitely too "thin" (incomplete), so the model loader cannot read it. |
transformers | 12,077 | closed | [Deepspeed] new docs | This PR expands/improves Deepspeed docs:
- documents the `sub_group_size` tune-up (thanks @samyam); see the config sketch after this list
- updates install info
- adds issue filing instructions
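A rough illustration of where `sub_group_size` sits in a ZeRO stage-3 config (the value below is only a placeholder, not a recommendation; see the docs for how to tune it):
```python
ds_config = {
    "zero_optimization": {
        "stage": 3,
        "sub_group_size": 1e9,  # placeholder; tune to trade memory for throughput
    },
    "fp16": {"enabled": True},
}
```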
@sgugger | 06-09-2021 01:30:46 | 06-09-2021 01:30:46 | |
transformers | 12,076 | closed | [wav2vec2 / Deepspeed] sync LayerDrop for Wav2Vec2Encoder + tests | This PR continues https://github.com/huggingface/transformers/pull/11638 and:
- adds the same GPU syncing for `Wav2Vec2Encoder` LayerDrop as there is for `Wav2Vec2EncoderStableLayerNorm` (see the sketch below for the general idea)
- doubles the tests to also test `"patrickvonplaten/wav2vec2_tiny_random"`, so that `Wav2Vec2Encoder` is exercised too
@patrickvonplaten
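For context, a small sketch of the general idea behind "synced" LayerDrop under distributed training: every rank has to agree on whether a layer is skipped, otherwise collective ops can deadlock. One common way is to broadcast the random draw; this is only an illustration of the concept, not the code in this PR:
```python
import torch
import torch.distributed as dist

def layer_is_skipped(layerdrop_prob: float) -> bool:
    device = torch.device("cuda", torch.cuda.current_device()) if torch.cuda.is_available() else torch.device("cpu")
    draw = torch.rand(1, device=device)
    if dist.is_available() and dist.is_initialized():
        dist.broadcast(draw, src=0)  # every rank reuses rank 0's draw
    return draw.item() < layerdrop_prob
```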
| 06-09-2021 00:55:09 | 06-09-2021 00:55:09 | |
transformers | 12,075 | closed | Using whitespace tokenizer for training models | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes/depends
- Using distributed or parallel set-up in script?: No
### Who can help
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): `BigBird`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I have a dataset for which I wanted to use a tokenizer based on whitespace rather than any subword segmentation approach.
This snippet I got off github has a way to construct and use the custom tokenizer that operates on whitespaces:-
```py
from tokenizers import Tokenizer, trainers
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase
from tokenizers.pre_tokenizers import CharDelimiterSplit
# We build our custom tokenizer:
tokenizer = Tokenizer(BPE())
tokenizer.normalizer = Lowercase()
tokenizer.pre_tokenizer = CharDelimiterSplit(' ')
# We can train this tokenizer by giving it a list of path to text files:
trainer = trainers.BpeTrainer(special_tokens=["[UNK]"], show_progress=True)
tokenizer.train(files=['/content/dataset.txt'], trainer=trainer)
```
I wanted to use it for pre-training the `BigBird` model, but facing two issues:
1. I can't seem to be able to use this snippet with the custom `tokenizer` above to convert tokenized sentences into model-friendly sequences
```py
from tokenizers.processors import BertProcessing
tokenizer._tokenizer.post_processor = tokenizers.processors.BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=16000)
```
This returns me an error, and without any preprocessing the output does not contain the sequence start and end tokens (`<s>`; `</s>`) as expected.
2. Next problem arises, when I save the tokenizer state in the specified folder, I am unable to use it via:
```py
tokenizer = BigBirdTokenizerFast.from_pretrained("./tok", max_len=16000)
```
since it yields the error that my directory does not "reference" the tokenizer files, which shouldn't be an issue since using `RobertaTokenizerFast` does work - I assume it has something to do with the tokenization `post-processing` phase.
<h2>Fully Reproducible Colab</h2>
I am really confused about this - I have created a fully reproducible colab notebook, with commented problems and synthetic data. Please find it [here](https://colab.research.google.com/drive/1z_GzMGpcl-7Vg7eWUPOqybojDfw2gli_?usp=sharing).
Thanx a ton in advance!!
| 06-08-2021 21:12:21 | 06-08-2021 21:12:21 | Hello! Thanks a lot for the well-crafted issue and reproducer, this is very helpful. Regarding your problem 2, I have a question: why are you saving the tokenizer's model, rather than the tokenizer itself?
I would argue that saving the entire tokenizer in a `tokenizer.json` would be better:
```py
# And now it is ready, we can save the vocabulary with
tokenizer.save('./tok/tokenizer.json')
```
Then you'll be able to reload your fast tokenizer (that is looking for a `tokenizer.json` file!) seamlessly:
```py
from transformers import BigBirdTokenizerFast
tokenizer = BigBirdTokenizerFast.from_pretrained("tok", max_len=16000)
```
I also verified that you do indeed recover the same encoding as when using the `tokenizers` library:
```py
>>> tokenizer("23 39999 999 8888 212").tokens()
['23', '39999', '999', '8888', '212']
```
Regarding your first question, I don't see anywhere in your code where you're adding a BERT template processor. I've taken the liberty to add it right after your `tokenizer` creation, see below. I am unaware of the error you got, but when trying it I had an error saying that `tokenizer.token_to_id("<s>")` was returning `None`.
To fix this you can specify that `<s>` and `<s/>` are special tokens when initializing your BPE trainer, as I have done below.
```py
from tokenizers import Tokenizer, trainers
from tokenizers.models import BPE
from tokenizers.normalizers import Lowercase
from tokenizers.pre_tokenizers import CharDelimiterSplit
# We build our custom tokenizer:
tokenizer = Tokenizer(BPE())
tokenizer.normalizer = Lowercase()
tokenizer.pre_tokenizer = CharDelimiterSplit(' ')
# We can train this tokenizer by giving it a list of path to text files:
trainer = trainers.BpeTrainer(special_tokens=["[UNK]", "<s>", "</s>"], show_progress=True)
tokenizer.train(files=['/content/dataset.txt'], trainer=trainer)
from tokenizers.processors import BertProcessing
import tokenizers
tokenizer.post_processor = tokenizers.processors.BertProcessing(
("</s>", tokenizer.token_to_id("</s>")),
("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.enable_truncation(max_length=16000)
```
After this, encoding a sequence returns the correct tokens with the correct special tokens:
```py
>>> tokenizer.encode("23 39999 999 8888 212").tokens
['<s>', '23', '39999', '999', '8888', '212', '</s>']
```<|||||>Thanks a ton @LysandreJik and replying so quickly and efficiently :cake: :+1: :rocket: !!!
For anyone else who might stumble on this problem, I have modified a simple example via the [Colab](https://colab.research.google.com/drive/1z_GzMGpcl-7Vg7eWUPOqybojDfw2gli_?usp=sharing) link attached above. If in any case it might not be working, I have uploaded the `.ipynb` file alongside this comment too. :hugs:
Have a fantastic day!
[HF_issue_repro.zip](https://github.com/huggingface/transformers/files/6623500/HF_issue_repro.zip)
<|||||>@LysandreJik Sorry to disturb you again, but I had this peculiar problem: I wanted to train BigBird on TPU, and it's reporting that the config.json might have missing parameters.
```py
[INFO|tokenization_auto.py:427] 2021-06-25 12:16:10,662 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:528] 2021-06-25 12:16:10,668 >> loading configuration file ./tok/config.json
Exception in device=TPU:0: Unrecognized model in ./tok. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: visual_bert, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron_bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/content/run_mlm.py", line 520, in _mp_fn
main()
File "/content/run_mlm.py", line 313, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py", line 529, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py", line 457, in from_pretrained
f"Unrecognized model in {pretrained_model_name_or_path}. "
ValueError: Unrecognized model in ./tok. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: visual_bert, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron_bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas
```
So apparently, I have been saving the tokenizer's state only, not the entire model. This is how I am doing it:
```py
!mkdir tok
# And now it is ready, we can save the tokenizer's state only, not the model
tokenizer.save('./tok/config.json')
```
I think that `config.json` might be a product of the tokenizer's model when saving, which we are omitting by saving the state only?
To make sure, I searched the `json` file to confirm that key is indeed not present there.
Would you happen to have a clue as to what I can do here?<|||||>Assuming the tokenizer state to be saved is the specific one for the model, I did this
```py
tokenizer = BigBirdTokenizerFast.from_pretrained("/content/tok", max_len=16000)
tokenizer.save_pretrained('./tokenizer')
```
And tried to load the tokenizer again. However, I can't verify whether it works because upon running the script, I lose connection to the instance :thinking:
Is this the correct usage though?<|||||>Hi @neel04.
I'm thinking you're facing an issue that was solved in the latest `transformers` release. Before the latest `transformers` release, `AutoTokenizer` couldn't guess which tokenizer to load from *just* the tokenizer files, it also needed to have access to the model's `config.json` in order to see the model and tokenizer classes.
It was addressed in the latest `transformers` release, where the tokenizer class would now be saved in `tokenizer_config.json`.
Please let me know if either of these fixes work:
1. Upgrade to the latest version, complete the `tokenizer_config.json` in your `./tok` directory with the following:
```
"tokenizer_class": "BigBirdTokenizer"
```
If it's not present, then create it.
2. Stay at your current version, and add a `config.json` file containing the same information in your `./tok` folder.
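If you prefer to create that file programmatically, a tiny sketch using the path and key from item 1 above:
```python
import json, os

os.makedirs("tok", exist_ok=True)
with open("tok/tokenizer_config.json", "w") as f:
    json.dump({"tokenizer_class": "BigBirdTokenizer"}, f)
```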
Regarding your second question, yes, using `save_pretrained` alongside `from_pretrained` is the correct usage.<|||||>Hey @LysandreJik,
Thanks a ton for the tips, I will surely try them if I face this error again! :hugs:
I am using the `master` branch now for my project, so I hope I won't face this problem again. However, I can't completely verify whether it works because I am unable to run it on TPU due to some memory leak.
If related problems arise, I would surely try out either of your fixes :rocket:
Have a fantastic day! |
transformers | 12,074 | closed | [test] support more than 2 gpus | This is just a small tweak so that `tests/test_trainer.py::TrainerIntegrationTest::test_fp16_full_eval` does not fail on rigs with 3+ GPUs.
@LysandreJik | 06-08-2021 19:29:53 | 06-08-2021 19:29:53 | |
transformers | 12,073 | closed | src_lang/tgt_lang missing in mbart example | ## Environment info
- `transformers` version: 4.6
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
### Who can help
Models:
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
Library:
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): mbart
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
I am running the official example in the doc [here](https://huggingface.co/transformers/model_doc/mbart.html) under `Supervised training`. However, there is a warning of
```
Keyword arguments {'src_lang': 'en_XX', 'tgt_lang': 'ro_RO'} not recognized.
```
when running
```
inputs = tokenizer(example_english_phrase, return_tensors="pt", src_lang="en_XX", tgt_lang="ro_RO")
```
Is this normal? | 06-08-2021 18:58:01 | 06-08-2021 18:58:01 | Tagging @patrickvonplaten @patil-suraj @LysandreJik again in case you know what was going on here. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @zijwang , thanks a lot for spotting this.
The tokenizer API has changed a bit: instead of passing `src_lang` and `tgt_lang` to the tokenizer's `__call__` method, we can now pass these when initializing the tokenizer, or we could set those properties as well. Here's a minimal example
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
# to change the src_lang
tokenizer.src_lang = "fr_XX"
``` |
transformers | 12,072 | closed | Inconsistent behavior on CPU vs. GPU | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): AutoModel
## To reproduce
Steps to reproduce the behavior:
Hi all - I've been struggling with inconsistent behavior on CPU vs. GPU.
When running on CPU the following code works as expected:
```Python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
def predict(model, tokenizer, test_str, device):
input_ids = tokenizer(test_str, return_tensors='pt', padding=True).to(device)
model.to(device)
model.eval()
with torch.no_grad():
pred = model(**input_ids)
logits = pred.logits.cpu()
return logits
device = 'cpu'
model_dir = 'test_dir'
model_type = 'roberta-base'
test_str = [
'Hello! I am a test string!',
]
model = AutoModelForSequenceClassification.from_pretrained(model_type, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_type)
# save model
model.save_pretrained(model_dir)
pred1 = predict(model, tokenizer, test_str, device)
print(pred1)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)
pred2 = predict(model, tokenizer, test_str, device)
print(pred2)
```
Output:
```
# Obviously output is random, however is identical
tensor([[-0.0238]])
tensor([[-0.0238]])
```
But when I change the to cuda by changing the device
```python
device = 'cuda'
```
I get a significantly different output:
```
tensor([[-0.3194]])
tensor([[-0.3414]])
```
Weirdly the above doesn't happen if I increase the length of my test string:
```
test_str = [
'Hello! I am a test string! Hello! I am a test string! Hello! I am a test string! Hello! I am a test string! ',
]
```
I'm pretty sure I'm missing something obvious - any help is appreciated! π
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect the output of the loaded model to be identical not only on CPU but also on GPU.
| 06-08-2021 17:04:56 | 06-08-2021 17:04:56 | Hello! This is weird, you indeed get a significantly different output. Running your exact code sample above, only changing the device to `cuda` yields the same results for me:
```
tensor([[0.0769]])
tensor([[0.0769]])
```
Tried it a few times, and I always get the same results - I've added an additional statement to ensure we get the exact same output:
```py
print(torch.allclose(pred1, pred2))
```
And we do!
I feel this may be a setup issue - would you mind trying it on Colab and sharing it if you get the same results so that I can investigate?<|||||>Thanks a lot @LysandreJik - Yes, indeed there are no issues on Colab.
It turns out the problem only occurs with these PyTorch versions:
```bash
# pip freeze | grep torch
torch==1.8.1+cu111
torchaudio==0.8.1
torchvision==0.9.1+cu111
```
But using `torch==1.8.1` works fine.
This is the output of my `nvidia-smi`:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 49C P0 70W / 149W | 0MiB / 11441MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
I created my environment like this:
```bash
conda create -n ml python==3.8
conda activate ml
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install transformers
```
Would you mind checking whether you can reproduce with the above?
I'd really like to understand what's going on here π
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,071 | open | XLM-R XL/XXL | # π New model addition
## Model description
The larger version of XLMR.
[Source](https://github.com/pytorch/fairseq/tree/master/examples/xlmr)
Model | Description | #params | vocab size | Download
---|---|---|---|---
`xlmr.xl` | XLM-R (`layers=36, model_dim=2560`) | 3.5B | 250k | [xlm.xl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz)
`xlmr.xxl` | XLM-R (`layers=48, model_dim=4096`) | 10.7B | 250k | [xlm.xxl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz)
## Open source status
* [x] the model implementation is available: (give details) -> Already available in huggingface
* [x] the model weights are available: (give details) -> link + source provided.
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 06-08-2021 14:52:16 | 06-08-2021 14:52:16 | I believe @stefan-it is on it :sunglasses: <|||||>I successfully converted xlmr.xl to huggingface model.
```
torch.Size([1, 11, 250880]) torch.Size([1, 11, 250880])
max_absolute_diff = 4.482269287109375e-05
Do both models output the same tensors? π₯
Saving model to converted_xlmr.xl2
Configuration saved in converted_xlmr.xl2/config.json
Model weights saved in converted_xlmr.xl2/pytorch_model.bin
```
is there anything I can do to help?
I'm in the middle of converting the xxl model too.<|||||>While processing xxl, it produces output with an absolute error of 0.003273.
Could this be because of the model size (10.7B)?
```
torch.Size([1, 11, 250880]) torch.Size([1, 11, 250880])
max_absolute_diff = 0.00327301025390625
Do both models output the same tensors? π©
```<|||||>@Soonhwan-Kwon Did you able to solve the issue? I don't have much experience. But I think the good error margin is `< 1e-6`. <|||||>@sbmaruf I found out that model conversion in fairseq ver 0.10.2 produced wrong result on both side, and it made min absolute diff small. @stefan-it told that he made it work and it is a great news! https://github.com/huggingface/transformers/pull/12082<|||||>I've managed to get the same value and pushed PR in @stefan-it's repo.
```
our_output
tensor([[[ 4.9569e+01, -1.0970e+00, 3.6279e+01, ..., 1.3821e+00,
1.2402e+00, 1.0905e+01],
[ 8.5117e+00, -9.9209e-02, 3.3087e+01, ..., 1.4223e+00,
1.5715e+00, 1.1260e+01],
[ 9.4228e+00, 1.8814e-01, 2.4515e+01, ..., 2.4245e+00,
1.0935e+00, 1.1929e+01],
...,
[ 8.8886e+00, -1.7367e-02, 2.5994e+01, ..., 1.9401e+00,
1.8700e+00, 1.2002e+01],
[ 9.7415e+00, -2.6768e-01, 3.2220e+01, ..., 1.9813e+00,
1.3128e+00, 9.6978e+00],
[ 1.6002e+01, 1.6512e+00, 5.7907e+01, ..., 1.9653e+00,
1.3225e+00, 1.8848e+01]]], grad_fn=<AddBackward0>)
their_output
tensor([[[ 4.9569e+01, -1.0970e+00, 3.6280e+01, ..., 1.3821e+00,
1.2402e+00, 1.0905e+01],
[ 8.5117e+00, -9.9211e-02, 3.3087e+01, ..., 1.4223e+00,
1.5715e+00, 1.1260e+01],
[ 9.4228e+00, 1.8814e-01, 2.4515e+01, ..., 2.4245e+00,
1.0935e+00, 1.1929e+01],
...,
[ 8.8886e+00, -1.7370e-02, 2.5994e+01, ..., 1.9401e+00,
1.8700e+00, 1.2002e+01],
[ 9.7415e+00, -2.6768e-01, 3.2220e+01, ..., 1.9813e+00,
1.3128e+00, 9.6978e+00],
[ 1.6002e+01, 1.6512e+00, 5.7907e+01, ..., 1.9653e+00,
1.3225e+00, 1.8848e+01]]], grad_fn=<AddBackward0>)
```<|||||>Hi @Soonhwan-Kwon Thanks for contributing the convertion code. Have you tested whether you could load the converted xlmr-xl or xlm-xxl using huggingface? <|||||>@ccclyu Yes I have tested the model and confirmed the better performance than xlmr large model in specific task. <|||||>@Soonhwan-Kwon Glad to know that. I have successfully converted the parameters using your PR https://github.com/stefan-it/transformers/pull/1 but it may have minor conflict with the current transformer codebase.
By the way, how do you load the huge model (13GB parameters for xlm-xl) using huggingface since one single GPU could not load the whole model? Did you use DeepSpeed for model parallels ?
<|||||>@ccclyu There are many options and deepspeed is the one option as you mentioned, and you can freeze layers to reduce gpu memory usage.<|||||>In progress in https://github.com/huggingface/transformers/pull/13210 by @Soonhwan-Kwon <|||||>is there any news about this? |
transformers | 12,070 | closed | Properly indent block_size | # What does this PR do?
Fixes a typo in the run_clm example. Fixes #12048 | 06-08-2021 13:15:14 | 06-08-2021 13:15:14 | |
transformers | 12,069 | closed | [WIP] Add helper function to align labels between datasets and model config | # What does this PR do?
This PR adds a helper function to align the `label2id` and `id2label` mappings between a `datasets.Dataset` and `PretrainedConfig`, with the alignment performed on the dataset itself.
This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.
An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings:
```python
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
},
"label2id": {
"CONTRADICTION": 0,
"ENTAILMENT": 2,
"NEUTRAL": 1
}
```
while the `mnli` dataset has the `contradiction` and `neutral` labels swapped:
```python
id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}
```
As a result, we get a much lower accuracy during evaluation:
```python
from datasets import load_dataset
from transformers.trainer_utils import EvalPrediction
from transformers import AutoModelForSequenceClassification, Trainer
# load dataset for evaluation
mnli = load_dataset("glue", "mnli", split="test")
# load model
model_ckpt = "microsoft/deberta-base-mnli"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
# preprocess, create trainer ...
mnli_enc = ...
trainer = Trainer(model, args=args, tokenizer=tokenizer)
# generate preds
preds = trainer.predict(mnli_enc)
# preds.label_ids misalinged with model.config => returns wrong accuracy (too low)!
compute_metrics(EvalPrediction(preds.predictions, preds.label_ids))
```
The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:
```python
from transformers.modeling_utils import align_dataset_labels_with_config
mnli_enc_aligned = align_dataset_labels_with_config(dataset=mnli_enc, config=model.config, label_column="label")
# preds now aligned and everyone is happy :)
preds = trainer.predict(mnli_enc_aligned)
```
cc @thomwolf @lhoestq
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-08-2021 13:13:19 | 06-08-2021 13:13:19 | Closing this in favour of implementing generic functionality on the `datasets` side here: https://github.com/huggingface/datasets/pull/2457 |
transformers | 12,068 | closed | grads is None when using GPT2 transformers in tensorflow | transformers ver: `4.7.0.dev0`
```
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel, TFGPT2Model, TFAutoModelForCausalLM
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token # to avoid an error
gpt2 = TFGPT2LMHeadModel.from_pretrained('gpt2')
gpt2.trainable = True
num_return_sequences = 1
#token_lens = [len(tokenizer.tokenize(sent)) for sent in prompts]
#max_length = math.ceil(np.array(token_lens).max())*2
max_len = get_tokens_len(ds, 0.99)
cce = tf.keras.losses.CategoricalCrossentropy()
optimizer = keras.optimizers.Adam(learning_rate=0.0001)
def loss_fn(output_sequences, labels):
syn_sents = tokenizer.batch_decode(output_sequences, clean_up_tokenization_spaces=True, skip_special_tokens=True)
syn_sents_pure = []
for sent, sent_syn in zip(prompts, syn_sents):
syn_sents_pure.append(sent_syn.replace(sent, '').replace('\n',' ').strip())
preds = model(np.array(syn_sents_pure))
assert preds.shape[0] == len(prompts) and preds.shape[1] == num_classes
label_oht = tf.keras.utils.to_categorical( np.array([label_idx[l] for l in labels]), num_classes = num_classes, dtype='int' )
label_oht_tf = tf.convert_to_tensor(label_oht)
assert label_oht.shape == preds.shape
loss_value = cce(label_oht_tf, preds)#.numpy()
return loss_value
rows = ds.df_test.sample(5)
prompts = rows['content'].tolist()
labels = rows['label'].tolist()
with tf.GradientTape() as tape:
# Run the forward pass of the layer.
# The operations that the layer applies
# to its inputs are going to be recorded
# on the GradientTape.
#logits = model(x_batch_train, training=True) # Logits for this minibatch
inputs = tokenizer(prompts, padding='max_length', truncation=True, max_length=max_len, return_tensors="tf")
output_sequences = gpt2.generate(
input_ids = inputs['input_ids'],
attention_mask = inputs['attention_mask'],
max_length= max_len*2,
temperature=1,
top_k=0,
top_p=0.9,
repetition_penalty=1,
do_sample=True,
num_return_sequences=num_return_sequences
)
# Compute the loss value for this minibatch.
loss_value = loss_fn(output_sequences, labels) # <tf.Tensor: shape=(), dtype=float32, numpy=0.062384058>
# Use the gradient tape to automatically retrieve
# the gradients of the trainable variables with respect to the loss.
grads = tape.gradient(loss_value, gpt2.trainable_weights)
```
I load the pre-trained gpt2 model with `TFGPT2LMHeadModel` and use the sentences it synthesizes from the given prompts to calculate the loss.
The loss seems ok, it is a tensor, such as
> <tf.Tensor: shape=(), dtype=float32, numpy=1.0446845>
But all the elements of `grads` are None.
Why is this? Any hints?
Thanks. | 06-08-2021 12:06:26 | 06-08-2021 12:06:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,067 | closed | Selecting specific GPU CUDA devices | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
Hello @sgugger,
Steps to reproduce the behavior:
1. I would like to use selected CUDA GPU cores among 8 of them in the HF `Trainer` class. I've written something along the following lines:
2. So, I've done `export CUDA_VISIBLE_DEVICES=1,8` to select specific GPU devices, and ran:
```
training_args = TrainingArguments(
output_dir=self._output_dir,
overwrite_output_dir=True,
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size= int(batch_size/2), # since evaluation per
logging_steps = 20,
save_total_limit = 20,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy = "steps",
load_best_model_at_end = True,
eval_accumulation_steps = 1,
logging_dir = "logs"
)
trainer = Trainer(
model=self._model,
args=training_args,
tokenizer=self._tokenizer,
data_collator=self._data_collator,
train_dataset=self._train,
eval_dataset = self._test,
)
print("Devices used are:")
print(training_args.device)
```
## Expected behavior
I was under the impression that the `training_args.device` should return me cuda:1,8 or something along those lines, but it still reverted back to cuda:0. Are there any arguments I could specify to select a particular core.
Thank you in advance!
 | 06-08-2021 10:01:41 | 06-08-2021 10:01:41 | When you do `CUDA_VISIBLE_DEVICES=1,8`, CUDA will still number the two visible GPUs as 0 and 1: 0 will correspond to physical GPU 1 and 1 to physical GPU 8. If you look at the output of `nvidia-smi`, you will see the training will only run on GPUs 1 and 8.<|||||>Thank you, silly me. I'll close the issue, thanks! |
transformers | 12,066 | closed | Fix LUKE integration tests | # What does this PR do?
Fixes the (slow) integration tests of LUKE. | 06-08-2021 09:01:16 | 06-08-2021 09:01:16 | |
transformers | 12,065 | closed | How can we predict story future based on past events? | Hello, is it possible to predict story future events on the basis of past events using transformer? | 06-08-2021 08:34:30 | 06-08-2021 08:34:30 | Hi,
Please ask this question on the [forum](https://discuss.huggingface.co/), rather than here. Github issues are mostly for bugs/feature requests.
Thanks.<|||||>Okay, I will hop there.<|||||>Hello @NielsRogge, I posted there but I think the forum is not so responsive, so it would be great if you could help me with this request.
Please.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,064 | closed | ImportError: cannot import name 'TFAutoModel' | transformers: 4.6.1
tensorflow-gpu: 2.0.0
when I wrote code below in my jupyter-notebook:
`from transformers import TFAutoModel`
I got an ImportError:
> **ImportError: cannot import name 'TFAutoModel'**
I wonder what's wrong with my code or dev environment. | 06-08-2021 07:26:47 | 06-08-2021 07:26:47 | |
transformers | 12,063 | closed | Fix tapas issue | # What does this PR do?
Fixes #12060
However, the (slow) integration tests of TAPAS that use relative position embeddings are failing for me locally, most likely due to the new version of the [torch-scatter](https://github.com/rusty1s/pytorch_scatter) dependency. I'll look into that.
Update: just tested the models in Google Colab (which has `torch 1.8.1+cu101`). Everything seems to work fine there. However, when running locally on `torch 1.8.1+cu111`, I'm getting entirely different logits/hidden states. Both are using `torch-scatter 2.7.0`. | 06-08-2021 07:13:33 | 06-08-2021 07:13:33 | |
transformers | 12,062 | closed | fp16 models getting auto converted to fp32 in .from_pretrained() | **stas00 edited**: this Issue has nothing to do with Deepspeed, but pure `transformers`
---------------------
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
- Python version: 3.6.8
- PyTorch version (GPU?): 1.6.0+cu92 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (not essential)
- Using distributed or parallel set-up in script?: Yes (not essential)
### Who can help
@LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): BertForMaskedLM
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
Masked LM
## To reproduce
Steps to reproduce the behavior:
1. Finetune a 16-bit low precision BertForMaskedLM model on any dataset using DeepSpeed and Trainer
2. Load the model and check the dtype using:
```python
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained(tokenizer_path)
model = BertForMaskedLM.from_pretrained(model_path)
print(model.dtype)
```
## Expected behavior
Outputs torch.float32 instead of the expected torch.float16. I was able to recover the original weights using model.half()
I think it would be helpful to highlight this behaviour of forced autoconversion either as a warning or as a part of from_pretrained() method's documentation or provide an additional argument to help retain fp16 weights. Willing to pick this issue up. Please let me know what would be the most appropriate fix.
| 06-08-2021 06:43:32 | 06-08-2021 06:43:32 | cc @stas00 <|||||>Oh, do you mean that your model was already in fp16 to start with? This combination I haven't tried yet.
First when reporting Deepspeed problems please always share the deepspeed config file and the TrainingArguments.
and then we can look at sorting it out.
<|||||>Yes, the saved model was already in fp16. Apologies, here are the needed files:
A) DeepSpeed config file:
```json
{"zero_allow_untested_optimizer": true,
"optimizer": {
"type": "AdamW",
"params": {
"lr":3e-5,
"betas": [
0.9,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"train_batch_size": 24,
"fp16": {
"enabled": true,
"loss_scale": 0,
"initial_scale_power": 16
}
}
```
B) Training Arguments:
```python
TrainingArguments(output_dir=/data/dps_finetune_16_wikitext, overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.STEPS, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=10.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/Jun08_18-02-30_jp3-g-31374-37031-i-2p4p2, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=100, save_total_limit=5, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=10, dataloader_num_workers=0, past_index=-1, run_name=/data/dps_finetune_16_wikitext, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=/data/config_fine_tune_bert.json, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['mlflow', 'tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=False, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, _n_gpu=1, mp_parameters=)
```
fp16 is set to False. I have also tried with fp16=True but no difference in behaviour was observed.
I also tested by loading the saved fp16 state_dict separately using torch.load() and then used it to initialize the BertForMaskedLM as follows:
```python
import torch
from transformers import BertConfig
state_dict = torch.load(model_path+ "pytorch_model.bin")
config = BertConfig.from_json_file(model_path+ "config.json")
model = BertForMaskedLM.from_pretrained(None, config=config, state_dict=state_dict)
model.dtype
```
model.dtype still outputs torch.float32.
The config.json file above (saved model's config file) is as follows:
```json
{
"_name_or_path": "/data/bert-base-cased/",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.6.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 28996
}
```
The _name_or_path points to the location of the pre-finetuning fp32 model. However, changing its value to the post-finetuning fp16 model also does not lead to any change in model.dtype output. Please let me know if there are any checks I could run or files I could provide.
Thanks!<|||||>Thank you for sharing these details. So indeed this looks like a case I haven't run into and this is not an integration issue.
So under zero3 `from_pretrained` calls `zero.Init()` which prepares the model for deepspeed's stage 3 work and it also gathers/scatters the model pieces across the gpus during state_dict loading. So this is the doing of one of these 2. But they are needed in order to use the deepspeed optimizer which works either in fp32 or mixed precision mode - Deepspeeds's `fp16.enabled` == mixed precision. They currently don't have fp16 non-mixed precision mode as far as I know. But clearly there is a need for that.
Most likely this is something Deepspeed core will have to solve. This use case is probably new to them too.
So please kindly use https://github.com/microsoft/DeepSpeed/issues/new to post the same details (Edit -> Copy-n-Paste) there.
and please tag me so that I could track the outcome and adjust things if need be in our side.
Thank you, @asit2898
<|||||>Hi @asit2898 , thanks for reporting your issue. I can help look at things from DeepSpeed's side.
Was the model fine-tuned with ZeRO enabled? From the DS config above it seems not, unless it is enabled somewhere on the HF side of things.
@stas00 , does the `from_pretrained` codepath go through DeepSpeed's `load_checkpoint()`, or is the checkpoint logic all on HF's side?
To start, I did a quick experiment with DeepSpeed (without ZeRO) and examined model parameter dtypes before and after `deepspeed.initialize()`. So far I haven't reproduced the issue:
- When FP16 is *not* enabled, the model's dtype is unchanged (eg., fp32 stays fp32 and fp16 stays fp16).
- When fp16 *is* enabled, the model weights are fp16 after `deepspeed.initialize()` no matter the initial dtype of fp32 or fp16.<|||||>
> @stas00 , does the `from_pretrained` codepath go through DeepSpeed's `load_checkpoint()`, or is the checkpoint logic all on HF's side?
As posted above, under zero3 `from_pretrained`:
1. calls `zero.Init()` which prepares the model for deepspeed's stage 3 work, and
2. gathers/scatters the model pieces across the gpus during state_dict loading.
> I did a quick experiment with DeepSpeed (without ZeRO)
The key is zero3. `from_pretrained` doesn't do anything deepspeed-wise unless it's zero3.<|||||>@ShadenSmith @stas00 Thanks for the replies! I did not enable any stage of ZeRO and just ran DeepSpeed using pure data parallelism.
The saved model was in fp16 at the end of DeepSpeed finetuning using the HF Trainer, which I think is in accordance with the experiments you carried out...
It is only after I load the saved model using the .from_pretrained() method that the weights get auto-converted to 32 bits...
I am not very familiar with the HF source code, but given that .from_pretrained() takes only the state_dict and model configuration as arguments, especially in the following case that I mentioned:
```python
import torch
from transformers import BertConfig, BertForMaskedLM
state_dict = torch.load(model_path + "pytorch_model.bin")
config = BertConfig.from_json_file(model_path + "config.json")
model = BertForMaskedLM.from_pretrained(None, config=config, state_dict=state_dict)
model.dtype
```
The HF object behaviour should be independent of whether or not the model was trained with DeepSpeed, right? :thinking:
Let me know if there are any experiments that can help isolate the effects of DeepSpeed from those of HG.
<|||||>Thanks for the clarification @asit2898 / @stas00 .
@stas00 , I don't yet understand the conclusion that the issue is in core DeepSpeed. Since ZeRO-3 is not enabled, is HF expecting the `Init()` to do something else? It should just be a no-op so long as Z3 is not enabled. Is the expectation on HF's side that there are fp32 weights that should be converted to fp16 in this instance? Or is the thought that `Init()` is still executing, and the weights are bumped to fp32 there when scattering?
The only model dtype transformations that we should be making are converting to FP16 when that is enabled. This issue is going in the opposite direction and I am not sure where the FP32 conversion would happen.<|||||>OK, Let me try to reproduce this first and then it'd be much easier to discuss this further.
for some reason I was under the impression that zero3 was enabled! but reviewing the config posted by @asit2898 it's not.
I will make an fp16 model, try to reproduce the problem and then follow up.<|||||>OK, this doesn't seem to have anything to do with Deepspeed.
Observe:
```
import torch
from transformers import BertForMaskedLM
mname = "prajjwal1/bert-tiny"
model = BertForMaskedLM.from_pretrained(mname)
model = model.half()
print(model.dtype)
model_path = "/tmp/bert-fp16"
model.save_pretrained(model_path)
model = BertForMaskedLM.from_pretrained(model_path)
print(model.dtype)
```
prints:
```
torch.float16
torch.float32
```
I will look next at why this bug is happening.
<|||||>OK, so it's not even `transformers`, it's pytorch that does that in `load_state_dict` https://github.com/pytorch/pytorch/issues/39428
Here is a standalone torch example:
```
import torch
from torch import nn
model = nn.Linear(1,1)
model = model.half()
print(model.weight.dtype)
torch.save(model.state_dict(), 'model.pkl')
model = nn.Linear(1,1)
model.load_state_dict(torch.load('model.pkl'))
print(model.weight.dtype)
```
prints
```
torch.float16
torch.float32
```
<|||||>Thinking more about it I think `load_state_dict` does the right thing. It adjusts the weights to the dtype of the model.
Since the user can't access the model until after `from_pretrained` they have no chance to choose its dtype.
1. So one possible solution here is to add an optional `dtype` arg to `from_pretrained` and if it's passed, do:
```
model.to(dtype=dtype)
```
as soon as it's instantiated.
2. An alternative approach is to sample the weight's dtype and convert the model automatically to that type. Is it ever possible that the weights could be of different dtype? If not this might be the transparent solution.
Of course, the user could do `model.half()` immediately after `from_pretrained` but the problem is that it will require 2x RAM which the user might not have, so the switching should occur before weights loading.
@sgugger, @LysandreJik, @patrickvonplaten - what do you think?<|||||>I'm okay with having a `dtype` argument to `from_pretrained`, personally.<|||||>I edited just now to offer an automatic detection. item 2.<|||||>@asit2898, until we sort it out please use `model.half()` after `from_pretrained` as a workaround.<|||||>I'm fine with having a `dtype` argument to `from_pretrained` as well, and if possible an automatic detection would be even better.
I would also be fine with a configuration attribute that would identify between fp32/fp16/bfloat16, as users have been surprised in the past that models weighing ~500mb on the hub ended up taking up much more RAM and much more disk space on their machines in the past (automatic detection would be better than having another configuration attribute).<|||||>Ah yes, this is definitely something that could be stored in the configuration!<|||||>Which also connects to my proposal from 2 months ago: https://github.com/huggingface/transformers/issues/11209, though it's slightly different since a model could be pre-trained in mixed precision and saved in fp32.
The thing is - if you have the weights of the model, it doesn't take long to get the dtype of the tensors contained in its saved `state_dict` (pytorch). One question: is it guaranteed they are always of the same dtype so that it's enough to check one of them, or should all be checked and the highest used if they are mixed?
<|||||>Specific discussion on auto-detection:
To do auto-detecting `torch.load()` needs to be moved before model instantiating.
Then we need to set default dtype,
https://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html
So the protocol would be:
1. torch.load (which would need to be moved up) or use `state_dict` if it was passed to `from_pretrained`
2. read one (all?) dtypes of the weights
3. set `torch.set_default_tensor_type(dtype)`
4. instantiate the model
5. restore `torch.set_default_tensor_type` to its previous value (so could be context manager)
6. `_load_from_state_dict`
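A rough sketch of steps 2-5 (purely illustrative - the helper names below are made up and this is not the actual `from_pretrained` code; it also assumes `torch.set_default_dtype` accepts the target floating dtype):
```python
import contextlib
import torch

@contextlib.contextmanager
def default_torch_dtype(dtype):
    # temporarily change the default dtype used when instantiating the model
    saved = torch.get_default_dtype()
    torch.set_default_dtype(dtype)
    try:
        yield
    finally:
        torch.set_default_dtype(saved)

def state_dict_dtype(state_dict):
    # sample the first floating point tensor (assumes all weights share one dtype)
    for t in state_dict.values():
        if t.is_floating_point():
            return t.dtype
    return torch.get_default_dtype()

# hypothetical usage inside from_pretrained:
# state_dict = torch.load(weights_path, map_location="cpu")
# with default_torch_dtype(state_dict_dtype(state_dict)):
#     model = cls(config)
# model.load_state_dict(state_dict, strict=False)
```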
<|||||>And if we choose to implement this for pytorch what do we do with tf and flax?<|||||>@stas00 Thanks a lot for addressing the issue! I really did not expect the issue to lie in the way PyTorch loads the model. I'll continue using model.half() and would be happy to help in any way I can...<|||||>@Rocketknight1, do you have an idea of how that is/would be managed with TensorFlow?
@patrickvonplaten @patil-suraj, do you have an idea of how that is/would be managed with JAX/Flax?<|||||>@LysandreJik Keras is quite opinionated about this - it has plenty of support for mixed-precision training (like PyTorch AMP) using a `Policy` object but I don't know of too many people doing true full float16/bfloat16 training, and I think you'd have to do that in pure TF or use some backend functions like `K.set_floatx`. I also think it has weird side-effects and breaks some layers.<|||||>Looks like we lost momentum on this one.
Please answer the following 2 questions with 1x and 2x (e.g. 1c, 2b - multiple versions are ok too if you're flexible)
1. dtype setting mechanism:
a. do we autodiscover the dtype from the state_dict
b. do we pass an explicit `dtype` argument to `from_pretrained`
c. a+b - with the `dtype` argument overriding autodiscovery
d. using model config attribute - need to change save_pretrained to save this attribute
e. a+d - with d overriding autodiscovery
2. Scope of the solution:
a. do we try to solve this for all 3 frameworks,
b. just pytorch for now - will be documented as such
Thank you!
p.s. if we add `from_pretrained(..., dtype)` should we do the same for `from_config(..., dtype)` so that the behavior is the same?<|||||>I'd vote for 1a, overridden by a configuration attribute (1d?) rather than the `from_pretrained` argument, and 2b.<|||||>Agreed with Lysandre: using a config attribute (which defaults to None or "auto") and switch back to the autodiscovery if this attribute is not set to a specific value. <|||||>**update**: added 1d and 1e options as proposed.
So if we go with 1e - `from_config` is then just 1d, right? since there is no model to do autodiscovery from.
Question: could it be possible that the model will have some weights that use a different dtype than the rest of the model?<|||||>Yes, `from_config` uses just 1d.
For your question, I'm not aware of such a situation existing.<|||||>@asit2898, please give a try to this PR https://github.com/huggingface/transformers/pull/12316 - it should do the right thing automatically as requested.<|||||>@asit2898, the PR is almost done, and once merged you will need to use one of:
```
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16)
model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto")
```
to meet your needs.<|||||>@stas00 Hi, might off-topic, but want to ask, does specific torch_dtype = torch.float16 but loading actually a float32 model, will it result in correct auto conversion on weights?<|||||>That's correct. It'd be an equivalent of `weight.to(torch_dtype)`
If your model was saved in fp32, `torch.load` will still allocate the fp32 weights as fp32 and then it'll be downcast. So if you don't have enough memory you might want to pre-save the model in your target dtype.
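i.e. something like this one-time conversion (a sketch - the model name and path are just placeholders):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", torch_dtype=torch.float16)
model.save_pretrained("/tmp/t5-small-fp16")
# subsequent from_pretrained("/tmp/t5-small-fp16", torch_dtype="auto") loads it as fp16
```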
I think future versions of `torch.load` should be able to automatically load in the target dtype, but it's not the case today.<|||||>@stas00 I found that if I specify torch_dtype=torch.float16, the loaded model is indeed fp16 even though the saved weights are fp32.
But this model is a LoRA adapter; when I merge the LoRA into the base model, the prediction results are not correct. How can I verify whether this conversion is correct or not?
(It might also be because of DeepSpeed, since I am using `zero_to_fp32.py` to convert the DeepSpeed state dict to a float32 adapter model.)
I am confused now about why the model loaded with the LoRA adapter is not right.
(But the LoRA adapter saved in fp16 right after training is correct.)<|||||>I suggest you start a new issue, @lucasjinreal - and please tag me there.
Also please make sure you have the latest DeepSpeed version - until recently it wasn't dealing correctly with frozen weights - it wasn't saving those in the checkpoint. I think around 0.9.2 is when it was fixed (or 0.9.3).<|||||>@stas00 I was using 0.8.3 and am now trying 0.9.4. Does transformers need to be on 4.30 to completely deal with this problem?<|||||>I can't say until you try, but ds 0.8.3 is definitely not doing the right thing.
`transformers` version has nothing to do with the issue, it's really about deepspeed not saving params that aren't in the optimizer, so `zero_to_fp32.py` doesn't get those and random values are then loaded when you try to load the model.
you can easily test the outcome from `zero_to_fp32.py` - if it's correct, the checkpoint size should be 4x the parameter count (4 bytes per param in fp32), e.g. a 10B-parameter model will produce a ~40GB checkpoint file. |
transformers | 12,061 | open | [testing] making network tests more reliable | We have a group of tests that require a reliable network, which is never 100% reliable, so they have been failing intermittently for many months.
I propose that those tests will be rewritten with unstable network in mind and include:
1. `time.sleep(3)`
2. retry 3-5 times
e.g. one of the candidates is:
`tests/test_hf_api.py::HfApiEndpointsTest::test_list_repos_objs`
but also recent tests that push to hub.
Perhaps a simple retry context manager can be added to `testing_utils.py`, which would trap exceptions and retry after a pause. And then simply wrap the content of existing tests into that context manager, e.g.:
```
with RetryAfterSleepTest():
# normal test code
```
it could accept the number of retries and sleep time between retries for optional arguments.
Of course, it's probably even better to make it also a decorator. e.g. `@unreliable_network_retry`
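Something along these lines in `testing_utils.py` could work (just a sketch, the name and defaults are illustrative):
```python
import functools
import time

def unreliable_network_retry(max_attempts=3, sleep_secs=3):
    # retries the wrapped test on any exception, sleeping between attempts
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return test_fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(sleep_secs)
        return wrapper
    return decorator
```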
@LysandreJik | 06-08-2021 03:31:56 | 06-08-2021 03:31:56 | Yes, I think that can help, we have similar issues in the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) repository.
I'm wondering if these issues don't come from the fact that these tests are very quick to run, therefore bursting the server which has issues handling all requests. It also happens with tokenizers which also run fast, but not with models.
If that's the case then a `time.sleep(3)` would work, but spreading the tests so that they're not run sequentially could also work.
cc @julien-c <|||||>From what I'm observing this issue doesn't happen anymore - should we close the issue and reopen if the network failures reappear at some point?<|||||>Sounds good, @LysandreJik <|||||>OK, it's happening again,
```
2021-09-28T00:56:00.8216138Z 502 Server Error: Bad Gateway for url: https://huggingface.co/patrickvonplaten/t5-tiny-random/resolve/main/config.json
2021-09-28T00:56:00.8217204Z ___________ TestDeepSpeedWithLauncher.test_do_eval_no_train_1_zero3 ____________
```
Our deepspeed integration tests are now integrated into the Deepspeed core CI and they report these failures.
You can see other HF projects reporting this issue as well:
e.g. see this thread: https://huggingface.slack.com/archives/C01BWJU0YKW/p1632819750394400
I wonder if we should somehow have a way not only to retry the download but also to gracefully recover - most likely by having a special setting in our test suite so that when a network failure occurs despite the retries, the test skips rather than fails. We won't use that on our CI, but for external use it'd be important not to interfere with their testing needs.
e.g. our deepspeed section of tests run on the Deepspeed CI intermittently fails to fetch files from the hub.
```
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url:
https://huggingface.co/sshleifer/tiny-gpt2/resolve/main/config.json
```
which impacts their CI.
I think perhaps we need a retry mechanism in the core of the network fetches and not put the burden on the tests.
@LysandreJik <|||||>Yes! How would you like to tackle this? With a retry on each test, with a small pause?
I wonder how best to handle it, given that chaining that test with no pause would probably result in the same issue happening over and over again, repeatedly, while putting a pause might significantly slow the test suite down.
Do you have any ideas regarding how to solve this best?<|||||>I believe the retry mechanism should be part of the download API, since that's the unreliable link in the chain.
I propose to have new arguments in the download API with sensible defaults:
- `try_times=3` - how many times to try before giving up
- `try_sleep_secs=1` - how long to sleep between trying again
With these defaults the longest delay is 2 seconds, which is probably not an issue for the test suite. Especially if we cache downloads.
If it can't download after 3 tries then if the client is OK then the server is at fault and it needs a higher capacity/scalability to handle a high request rate.
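In code the proposal would look roughly like this (a sketch around a stand-in `requests.get`, not the real `file_utils` helper):
```python
import time
import requests

def get_with_retry(url, try_times=3, try_sleep_secs=1, **kwargs):
    # retry transient failures, sleeping between attempts
    for attempt in range(try_times):
        try:
            return requests.get(url, **kwargs)
        except requests.exceptions.RequestException:
            if attempt == try_times - 1:
                raise
            time.sleep(try_sleep_secs)
```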
<|||||>That sounds good, even if I'm a bit afraid that retrying in succession won't solve much. When a test fails for server error, then usually other tests fail. I'm still open to trying it out to see if it improves these errors!
Would you like to give it a try? I'm guessing only this method needs to be modified: https://github.com/huggingface/transformers/blob/efea0f868bd381244e3cef51b388293e41a36d1e/src/transformers/file_utils.py#L1594
cc @julien-c as this is a safeguard against the server's instabilities.<|||||>BTW @LysandreJik i think we should soon switch from `file_utils` to `huggingface_hub` no?
none of this is really transformers-specific?<|||||>Indeed, some of the logic could be upstreamed in `huggingface_hub` (was pushing this back as I'm a fervent believer of "if it ain't broke, don't fix it", especially for such a core component of the library which doesn't need to evolve much)<|||||>yes, same feeling. However i think we should try to prevent the two codebases from diverging too much since initially the code was extracted from transformers anyways
(failure retry is an example of a quite big change, for instance)
Maybe if we do this, an option would be to upstream the same change to huggingface_hub then?<|||||>Yes, that sounds like a good plan. We've started moving some methods (like `HfApi`) to `huggingface_hub` anyway, so for iso-behavior methods, I'm fine to move them in `huggingface_hub` sooner rather than later.
Let's go with the retry option first in `transformers`, and I'll take the opportunity to upstream it in `huggingface_hub` once we have settled on the logic and it is merged in `transformers`.<|||||>As @sgugger mentions offline, this issue also appears in the push to hub methods (403 errors, 500 errors), so maybe adding a retry option there for testing would make sense as well<|||||>> That sounds good, even if I'm a bit afraid that retrying in succession won't solve much. When a test fails for server error, then usually other tests fail. I'm still open to trying it out to see if it improves these errors!
Should these incidents (repetitive failures) be also logged or does someone review server logs to ensure that these failures aren't indicative of an issue with the server?
We need to have a clear distinction between a failure due to network transport issues vs server's inability to cope with the traffic. If the server is overloaded, then of course re-trying won't help. But then we need to fix the server not to be overloaded.<|||||>FWIW, this issue continues on our CI:
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.16.1/metrics/sacrebleu/sacrebleu.py
```
<|||||>Do you have a link for `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.16.1/metrics/sacrebleu/sacrebleu.py
`?
cc @lhoestq <|||||>Oh, it was just giving an example of an intermittent failure on our CI. It was fine when CI restarted.
So with re-try it could have been avoided. Since all other files were fetched or looked up just fine.<|||||>Hi ! If it can help, note that in `datasets` we've already added a retry mechanism in [file_utils.py](https://github.com/huggingface/datasets/blob/16f562b381a9e2ad1934b82ffcd6ea1695b6d74e/src/datasets/utils/file_utils.py#L378-L387)<|||||>@lhoestq, I didn't follow all the paths, but it appears that `max_retries` is either 0 or 1 almost everywhere in `datasets` unless the user overrides it. Unless you believe a single retry is sufficient.
But, yes, this is what we want in transformers! Thank you!<|||||>Additionally, I'm thinking about this: most of the time, on setups like CI or a developer's box, most of the datasets and transformers files have already been cached.
Would it make sense to check that if
1. there is a failure to fetch a model or a dataset or a support file
2. and there is already a cached version of the same
to simply switch to using a local file and tell the user that this was done?
I believe this is an even more realistic use case and will 10-100x reduce the amount of failures due to network issues.
If you agree I'd use the following algorithm:
1. try to fetch the file
2. look up local cache
3. retry to fetch the file
4. retry to fetch the file
5. assert with: bailing after re-tried 3 times and no local version found cached
with each step being needed only if the previous fails.
<|||||>Here is another example of CI intermittent failure which could have been re-tried and not fail the whole CI:
```
E requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/api/models/facebook/mbart-large-50-one-to-many-mmt
```
Source:
https://app.circleci.com/pipelines/github/huggingface/transformers/31002/workflows/327de938-0361-420e-abb5-c35d45bca5bb/jobs/318450
<|||||>I'm all for a retry mechanism, especially given the recent errors we've been seeing in the CI.
Regarding the fetched files, I'd rather we keep it the same as it is right now: we have a `local_files_only` keyword that enables fetching from the local folder. With this argument, we have this option as an opt-in, rather than as a behind-the-scenes method, which I think is important.
Otherwise, the user might use `from_pretrained` to fetch the latest version of a repository, and the version fetched could actually be the latest they have on their setup, which is a small (but potentially impactful) breaking change.
~Would you have some bandwidth to implement the retry mechanism?~ I should have a bit of time to tackle this by the time I'm back from holidays. <|||||>We can't use `local_files_only` on CI since then we will miss updated remote data.
I agree with your discussion of the user-mode.
Here are a few more extensions to my proposal:
1. we can differentiate between CI-mode and user-mode. in CI-mode (env var activated) we can use the algo suggested in https://github.com/huggingface/transformers/issues/12061#issuecomment-987448258
2. In parallel I think there is a difference when we get a 50x and 40x response. Regardless of CI-mode or not, a 40x is a client error and should not try to use a local cache. 50x is a server error and thus a local cache should be used.
With the caveat for non-public repos, where error codes are obscured so as not to expose the private repo layout and the un-authenticated user always gets a 40x regardless of the true path - but I think this falls neatly into the 40x group anyway: a client error.
So here an updated algo:
```
# in this algo a successful "fetch the file from online or cache" exits the algo.
If env["TRANSFORMERS_CI_CACHE_RETRY_ON_500"]:
1. try to fetch the file
2. if 50x: look up local cache
else: fail
3. if not in cache: sleep and retry to fetch the file
4. if 50x: sleep and retry to fetch the file
else: fail
5. assert with: bailing after re-tried 3 times and no local version found cached
else: # normal mode
1. try to fetch the file
2. do nothing
3. if 50x: sleep and retry to fetch the file
else: fail
4. if 50x: sleep and retry to fetch the file
else: fail
5. assert with: bailing after re-tried 3 times
```
The two halves are almost the same - the only addition is the cache lookup in CI-mode for step 2, hence the "do nothing" 2nd step in the 2nd half.
What do you think?
and of course the same should apply to `datasets` and `transformers`<|||||>Thank you for handling the github bot - would love to make time for this this or next week. |
transformers | 12,060 | closed | [skipped test] to fix | https://github.com/huggingface/transformers/pull/12059 skipped failing: `tests/test_modeling_tapas.py::TapasUtilitiesTest::test_reduce_sum_vectorized`
This issue is to track its resolution so that it won't be forgotten.
@LysandreJik | 06-08-2021 03:23:45 | 06-08-2021 03:23:45 | |
transformers | 12,059 | closed | [CI] skip failing test | skipping a consistently failing test that breaks CI
@LysandreJik | 06-08-2021 03:22:15 | 06-08-2021 03:22:15 | |
transformers | 12,058 | closed | [Deepspeed] various fixes | This PR includes a few small fixes in config files, tests and docs:
- replace deprecated config `cpu_offload` with `offload_optimizer`
- `sub_group_size` setting was too big - needing too much GPU RAM
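For reference, the shape of the config change looks roughly like this (the values are illustrative, not necessarily the exact ones from the PR):
```json
"zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu", "pin_memory": true },
    "sub_group_size": 1e9
}
```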
@sgugger | 06-08-2021 00:41:43 | 06-08-2021 00:41:43 | That's a good call, Sylvain. I think the deprecation warning has been showing up long enough that I didn't bother checking. But I did check now and all is good it was done before deepspeed==0.3.16 was released ([commit](https://github.com/microsoft/DeepSpeed/commit/0d4a54a04d658db40a120bc10c6f1f1a4478f6f1)) and I retested with 0.3.16 just to be sure. |
transformers | 12,057 | closed | adds metric prefix. | # What does this PR do?
Adds metric prefix to metrics dict. This is needed for `metric_for_best_model` to function properly. See https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1516
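Roughly, the change prefixes the returned metrics like this (illustrative snippet, not the exact diff):
```python
metrics = {"f1": 92.1, "exact": 85.0}
metric_key_prefix = "eval"
metrics = {f"{metric_key_prefix}_{k}": v for k, v in metrics.items()}
# -> {"eval_f1": 92.1, "eval_exact": 85.0}, which is the form `metric_for_best_model` looks up
```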
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 06-07-2021 23:36:59 | 06-07-2021 23:36:59 | You just need to tweak the test of this script in examples/pytorch/test_examples.py, since it's looking for f1 instead of eval_f1. Same for exact.<|||||>@sgugger, the example tests are fixed now, this other failure is a mystery to me. I suspect that failure is caused by another change. Let me know if you think the `run_torch_tests` failure is related to this PR and I'll look into it. <|||||>No this failure is independent and currently being investigated, so we can merge this PR safely. Thanks again! |
transformers | 12,056 | closed | [testing] set tests to not rebuild datasets | recently `datasets` made in-memory datasets enabled by default - which is great for those who want it, but is a terrible idea for tests and for those who develop by constantly restarting scripts, as the datasets aren't being cached and get rebuilt on every run.
So we should turn this feature off in `*/conftest.py` by setting:
```
HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0
```
But it's going to be renamed shortly to `HF_DATASETS_IN_MEMORY_MAX_SIZE`
https://github.com/huggingface/datasets/pull/2409#issuecomment-850549742
https://github.com/huggingface/datasets/pull/2454
So for now this issue is tracking this change and then will add it to the tests.
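The change would roughly be (sketch):
```python
# */conftest.py - must run before `datasets` is imported
import os

os.environ["HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES"] = "0"  # or HF_DATASETS_IN_MEMORY_MAX_SIZE after the rename
```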
| 06-07-2021 18:50:52 | 06-07-2021 18:50:52 | `datasets` has just made this feature disabled by default: https://github.com/huggingface/datasets/pull/2460
So nothing needs to be done. |
transformers | 12,055 | closed | Settings for perfect Story writing based on the input text? | Hello, thank you so much for this awesome project.
What settings do you suggest so by using those settings it seems like a story is continuing based on the previous story input as a text file? | 06-07-2021 13:20:08 | 06-07-2021 13:20:08 | Hello, sorry for the long answer time! Unfortunately, we favor using the [forum](https://discuss.huggingface.co) for questions like this where you're much more likely to get an answer than on Github Issues. If that's not already the case, do you mind opening a thread there?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,054 | closed | How to update GPT2 with a loss provided by another separate module? | Suppose I have N prompts (sentences) for generation. They are fed into GPT2 to get the corresponding synthesized sentences.
And I have a separate black box which can return a loss given these synthesized samples. The black box is just another component.
So for every batch, GPT2 generates samples and gets the loss back, repeatedly.
What I want to do is use the loss from the black box to update the parameters of GPT2 at each batch.
The generation part of GPT2 is quite simple, but how can I implement the idea of updating it with this loss?
Is there any example of doing this?
Please give some thoughts, thanks. | 06-07-2021 11:20:15 | 06-07-2021 11:20:15 | I guess you can do it like so:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.train()
# when generating, we will use the logits of right-most token to predict the next token
# so the padding should be on the left
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token # to avoid an error
prompts = ["Hello, my dog is a little", "Hello, my dog is"]
inputs = tokenizer(prompts, padding=True, return_tensors="pt")
output_sequences = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask']
)
loss = black_box(output_sequences)
loss.backward()
```
Please note that the [forum](https://discuss.huggingface.co/) is a better place to ask questions, Github issues are mostly for bugs/feature requests.
Thanks.<|||||>> I guess you can do it like so:
>
> ```
> from transformers import GPT2Tokenizer, GPT2LMHeadModel
> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
> model = GPT2LMHeadModel.from_pretrained('gpt2')
>
> model.train()
> # when generating, we will use the logits of right-most token to predict the next token
> # so the padding should be on the left
> tokenizer.padding_side = "left"
> tokenizer.pad_token = tokenizer.eos_token # to avoid an error
>
> prompts = ["Hello, my dog is a little", "Hello, my dog is"]
> inputs = tokenizer(prompts, padding=True, return_tensors="pt")
>
> output_sequences = model.generate(
> input_ids=inputs['input_ids'],
> attention_mask=inputs['attention_mask']
> )
>
> loss = black_box(output_sequences)
> loss.backward()
> ```
>
> Please note that the [forum](https://discuss.huggingface.co/) is a better place to ask questions, Github issues are mostly for bugs/feature requests.
>
> Thanks.
thanks, very helpful. |
transformers | 12,053 | closed | [JAX] Bump jax lib | # What does this PR do?
Thanks for spotting it @stas00
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-07-2021 10:55:24 | 06-07-2021 10:55:24 | |
transformers | 12,052 | closed | No max_length set on huawei-noah/TinyBERT_General_4L_312D/config.json | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Darwin-20.3.0-x86_64-i386-64bit
- Python version: 3.6.10
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @JetRunner
## Information
Model I am using: huawei-noah/TinyBERT_General_4L_312D
The problem arises when using:
* [x] my own modified scripts: (give details below)
```{python}
import json
import pandas as pd
import gzip
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('huawei-noah/TinyBERT_General_4L_312D')
def parse(path):
g = gzip.open(path, 'rb')
for l in g:
yield json.loads(l)
def getDF(path):
i = 0
df = {}
for d in parse(path):
df[i] = d
i += 1
return pd.DataFrame.from_dict(df, orient='index')
local_path_to_review_data = "/Users/alexandrecombessie/Downloads/Software_5.json.gz" # See download link below
df = getDF(local_path_to_review_data)
df["review_text_full_embeddings"] = [
json.dumps(x.tolist()) for x in model.encode(df["reviewText"].astype(str))
]
```
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
- Amazon review dataset sample (http://deepyeti.ucsd.edu/jianmo/amazon/categoryFilesSmall/Software_5.json.gz)
## To reproduce
Steps to reproduce the behavior:
See script above
## Expected behavior
A `max_length` should be set in the model `config.json` for the tokenizer to apply truncation (which is my expected behavior).
See https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D/blob/main/config.json
I could do it myself, but I am not able to understand what is the right length to set.
| 06-07-2021 10:22:55 | 06-07-2021 10:22:55 | Hi @patrickvonplaten @JetRunner,
Apologies for following up, I know it's a busy time.
Would you have some time to look into this issue?
Thanks,
Alex <|||||>Hi Alex, I think the right thing to do is to look up `max_len` from the TinyBERT paper. Do you know what is that setting? <|||||>> Hi Alex, I think the right thing to do is to look up `max_len` from the TinyBERT paper. Do you know what is that setting?
Yeah, you are right. The paper seems to indicate 128 for the general distillation.

I will reach out to the authors because they mention another length of 64 for task-specific distillation. I just want to be sure which one is used by the model hosted on Huggingface.
As a side-note, it would be really useful (at least to me) to have some automated checks and/or feedback system on the model hub.
<|||||>Hi @JetRunner,
I got the following answer from the author (Xiaoqi Jiao)
> The max_len of TinyBERT is 128, but if the max sequence length of your downstream task is less than max_len, you may set max_len to a small value like 64 to save the computing resources.
Should I add `max_length: 128` on the model hub? Happy to take this small PR directly.
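In the meantime, the workaround I can use on my side is to cap the length explicitly (sketch, with 128 per the author's answer above):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("huawei-noah/TinyBERT_General_4L_312D")
model.max_seq_length = 128  # truncate inputs to 128 tokens when encoding
```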
Cheers,
Alex
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,051 | closed | Add early stopping args to TrainingArguments | # What does this PR do?
While working on the collaborative training project, I added early stopping args to `TrainingArguments`.
Feel free to close this PR if you consider it not pertinent.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 06-07-2021 09:26:07 | 06-07-2021 09:26:07 | I reused the script:
- https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py
but allowing early stopping: [`EarlyStoppingCallback`](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_callback.py#L505)
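A minimal sketch of the wiring (the argument values and the `eval_f1` metric are just examples; `model` and the datasets stand for whatever the script builds):
```python
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="eval_f1",
)
trainer = Trainer(
    model=model,                      # defined elsewhere in the script
    args=training_args,
    train_dataset=train_dataset,      # defined elsewhere in the script
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.0)],
)
```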
Therefore, I had to add those args to `TrainingArguments` to allow end users of my script to pass those parameters to EarlyStoppingCallback.
I thought that it might be useful for end users to have those args in TrainingArguments so that they can use early stopping in their trainer the same way I did for my script.<|||||>We don't have any `TrainingArguments` that are not used anywhere. Users are already complaining this class has too many, so if we add some, they have to actually do something.
If they go with an update in the example scripts, then all the example scripts should be updated in the PR :-) <|||||>I could also add early stopping to some of the example scripts...
I may do it this weekend though...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,050 | closed | [end2end RAG] AttributeError: module 'pickle' has no attribute 'PickleBuffer' | Hi folks,
@shamanez , thanks for your awesome end2end RAG project. But when I try to reproduce the results of https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever, I run into some problems.
```
Traceback (most recent call last):
File "finetune_rag.py", line 789, in <module>
main(args)
File "finetune_rag.py", line 726, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "finetune_rag.py", line 123, in __init__
hparams.model_name_or_path, hparams.actor_handles, config=config
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 165, in from_pretrained
index=index,
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in __init__
for worker in self.retrieval_workers
File "/home/shunyu/container/Project/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in <listcomp>
for worker in self.retrieval_workers
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 112, in remote
return self._remote(args, kwargs)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 153, in _remote
return invocation(args, kwargs)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 147, in invocation
num_returns=num_returns)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/actor.py", line 865, in _actor_method_call
list_args, name, num_returns, self._ray_actor_method_cpus)
File "python/ray/_raylet.pyx", line 1359, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 1364, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 304, in ray._raylet.prepare_args
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/anaconda/envs/rag2/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer'
```
We think it's mainly related to the versions and dependencies of ray, pyarrow and datasets.
My main pip list:
```
faiss-cpu == 1.7.0
datasets == 1.6.2
psutil == 5.7.0
torch == 1.6.0
pytorch-lightning == 1.3.1
nvidia-ml-py3 == 7.352.0
ray == 1.3.0
pyarrow == 3.0.0
```
I think anyone can investigate it from this reproducible example:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
cd ./examples/research_projects/rag-end2end-retriever
pip install -r requirements.txt
bash ./test_run/test_finetune.sh
```
And someone seems to have faced the same question: https://discuss.ray.io/t/cant-pickle-pyarrow-dataset-expression/1685/8
So @shamanez, could you please show the entire pip list of the env to run END2END RAG? Or point out how to fix it?
Let me know if more information is needed and thanks for your help. | 06-07-2021 08:32:15 | 06-07-2021 08:32:15 | Hi,
Thanks a lot. I think this error is due to Python version. Check with 3.8 or above. It will work.<|||||>> Hi,
>
> Thanks a lot. I think this error is due to Python version. Check with 3.8 or above. It will work.
Thank you, it really worked when I ran test_finetune.sh.
Emm, it's silly that I have tried to change Python version from 3.6 to 3.7, but forgot the 3.8. <|||||>Perfect:)
On Mon, Jun 7, 2021, 21:41 Dopaminezsy ***@***.***> wrote:
> Hi,
>
> Thanks a lot. I think this error is due to Python version. Check with 3.8
> or above. It will work.
>
> Thank you, it really worked when I run test_finetune.sh.
>
> Emm, it's silly that I have tried to change Python version from 3.6 to
> 3.7, but forgot the 3.8.
>
> β
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12050#issuecomment-855777762>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGRPFOPULVKPX2AAR7LTRSH5PANCNFSM46HEVE4A>
> .
>
<|||||>Hi, friend @shamanez :
Sorry to disturb you again. I face the following bug when I run finetune_rag_ray_end2end.sh.
Could you give some suggestions?
```
2021-06-08 09:40:19,202 INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.96.6:6379
INFO:__main__:Getting named actors for NODE_RANK 0, LOCAL_RANK 1
Traceback (most recent call last):
File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 794, in <module>
main(args)
File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 726, in main
named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 726, in <listcomp>
named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
File "/home/t-shzhang/.local/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/home/t-shzhang/.local/lib/python3.8/site-packages/ray/worker.py", line 1659, in get_actor
handle = worker.core_worker.get_named_actor_handle(name)
File "python/ray/_raylet.pyx", line 1521, in ray._raylet.CoreWorker.get_named_actor_handle
File "python/ray/_raylet.pyx", line 159, in ray._raylet.check_status
ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.
```<|||||>Seems like a problem in your cluster. What is your system. Seems like it
is a multi node system.
On Tue, Jun 8, 2021, 21:52 Dopaminezsy ***@***.***> wrote:
> Hi, friend @shamanez <https://github.com/shamanez> :
> Sorry to disturb you again. I face the following bug when run
> finetune_rag_ray_end2end.sh.
> Could you give some sugguestions?
>
> 2021-06-08 09:40:19,202 INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.96.6:6379
> INFO:__main__:Getting named actors for NODE_RANK 0, LOCAL_RANK 1
> Traceback (most recent call last):
> File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 794, in <module>
> main(args)
> File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 726, in main
> named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
> File "/workspaceblobstore/azureml/rag1_1623144585_ed3652ce/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 726, in <listcomp>
> named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
> File "/home/t-shzhang/.local/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
> return func(*args, **kwargs)
> File "/home/t-shzhang/.local/lib/python3.8/site-packages/ray/worker.py", line 1659, in get_actor
> handle = worker.core_worker.get_named_actor_handle(name)
> File "python/ray/_raylet.pyx", line 1521, in ray._raylet.CoreWorker.get_named_actor_handle
> File "python/ray/_raylet.pyx", line 159, in ray._raylet.check_status
> ValueError: Failed to look up actor with name 'retrieval_worker_0'. You are either trying to look up a named actor you didn't create, the named actor died, or the actor hasn't been created because named actor creation is asynchronous.
>
> β
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12050#issuecomment-856630780>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGTIOEOYQODXAUE25S3TRXR6FANCNFSM46HEVE4A>
> .
>
<|||||>Actually I have discussed this issue previously.
This happens when you try to run the code in distributed mode. @calderma also mentioned the same thing.
https://github.com/huggingface/transformers/pull/11655#issuecomment-845295355
I think this is not an issue with Ray or anything. It is something with how you run a distributed code in Pytorch Lightining. Sadly I do not have a distributed system to test :(.
But in the above thread I pointed out some workarounds. Also I have mentioned the reason to get this issue.
Just to summarize.. we initialize RAY actors only in master process (when initializing the master ddp process). Other DDP processes simply access the RAY worker by its name.
But when having a distributed system, I think initialization should happen in each node. In order to activate distributed training, you need to add **node** variable to lightning trainer. Then you should initialize the training as given in their tutoria. Please let me know how it goes.
Please follow the following commands to run the code in a Cluster.
https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster
<|||||>> Actually I have discussed this issue previously.
>
> This happens when you try to run the code in distributed mode. @calderma also mentioned the same thing.
>
> [#11655 (comment)](https://github.com/huggingface/transformers/pull/11655#issuecomment-845295355)
>
> I think this is not an issue with Ray or anything. It is something with how you run a distributed code in Pytorch Lightining. Sadly I do not have a distributed system to test :(.
>
> But in the above thread I pointed out some workarounds. Also I have mentioned the reason to get this issue.
>
> Just to summarize.. we initialize RAY actors only in master process (when initializing the master ddp process). Other DDP processes simply access the RAY worker by its name.
>
> But when having a distributed system, I think initialization should happen in each node. In order to activate distributed training, you need to add **node** variable to lightning trainer. Then you should initialize the training as given in their tutoria. Please let me know how it goes.
>
> Please follow the following commands to run the code in a Cluster.
>
> https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster
Thank you a lot. I think it will help.
I'm going to try.<|||||>Hi again,
As for the DDP problem, I followed your instructions to add **node** variable before define trainer in lightning_base.py. But it didn't help and the BUG also as before. Do you know there are other instructions on PL?
```
train_params["accelerator"] = "ddp" or "ddp2" or "dp"
train_params["num_nodes"] = 1
trainer = pl.Trainer.from_argparse_args(
args,
weights_summary=None,
callbacks=[logging_callback] + extra_callbacks + [InitCallback()] + [checkpoint_callback],
logger=logger,
plugins=[DDPPlugin(find_unused_parameters=True)], # this is needed in new pytorch-lightning new version
val_check_interval=1,
num_sanity_val_steps=2,
**train_params,
)
```
By the way, I think it may be related to the RAY, for that RAY works before pl.Trainer in finetune_ray.py? Feel free to point out my naive error.<|||||>Yeah Ray workers get initialized before starting the training loop. Checkout if conditions mentioned in RAY worker initialization part. It basically says initialize only in the master process.
But when you try distributed training, it says it can't find already initialized worker. That means processes hasn't shared between nodes.
This is not a problem when you run the code in normal mode. I did not change anything to the RAY initialization. You will get the same problem in Original RAG too.
Check https://pytorch-lightning.readthedocs.io/en/latest/clouds/cluster.html#general-purpose-cluster.
This has several ways to start your cluster. I think you need to play around with those environment variables.<|||||>@Dopaminezsy
There is one more thing you can do. I updated the original [RAG](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py) with the latest PL. But I use custom plugging there (I did not use it for this project since it is still in an experimental plugging). Can you try to run the original RAG in distributed mode and let me know?
Also, play around with these [lines](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535).
One more thing, try to print something after this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L709). I want to know whether your code fails before the DDP process. If it doesn't go inside the if condition, when starting, it is a problem with RAY, otherwise it is something with PL. Please let me know these things asap.<|||||>I forgot to tell you something related: I have changed [https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535](url) to the below (changed **and** to **or**) .
`if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) or ( "NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0 ):<!--EndFragment-->`
If use the original code with **and**, I got the bug as follows:
```
INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.64.3:6379
Traceback (most recent call last):
File "finetune_rag.py", line 790, in <module>
main(args)
File "finetune_rag.py", line 718, in main
os.environ["NODE_RANK"], os.environ["LOCAL_RANK"]
File "/opt/miniconda/envs/rag/lib/python3.8/os.py", line 675, in __getitem__
raise KeyError(key) from None
KeyError: 'LOCAL_RANK'
```
You may benefit from the above information. Now I am going to think your new suggestions.
<|||||>Actually, I think I can solve your problem. Please let me know once you have done the test (find whether the code goes inside the if condition in the initial process).<|||||>> I forgot to tell you something related: I have changed [https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/finetune_rag.py#L535](url) to the below (changed **and** to **or**) .
>
> `if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) or ( "NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0 ):<!--EndFragment-->`
>
> If use the original code with **and**, I got the bug as follows:
>
> ```
> INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.34.64.3:6379
> Traceback (most recent call last):
> File "finetune_rag.py", line 790, in <module>
> main(args)
> File "finetune_rag.py", line 718, in main
> os.environ["NODE_RANK"], os.environ["LOCAL_RANK"]
> File "/opt/miniconda/envs/rag/lib/python3.8/os.py", line 675, in __getitem__
> raise KeyError(key) from None
> KeyError: 'LOCAL_RANK'
> ```
>
> You may benefit from the above information. Now I am going to think your new suggestions.
Yes, this should give you an error. You have to use **and** operator. Because in the beginning there is no "Local_Rank" variable and it only checks the **""LOCAL_RANK" not in os. environ "** condition, prior to going into the next term with Nodes.
But if you remove the **and** operator between two conditions, it will try to check this "os.environ["LOCAL_RANK"] == 0". I know this is bit tricky :) <|||||>Newly:
These are about when I run latest version of original RAG on the clusters.
When using **and** operator, it faced _KeyError: 'LOCAL_RANK'_ as befor.
When changing **and** operator to **or**, it faced _ValueError: Failed to look up actor with name 'retrieval_worker_0'._ as before.
When changing **and** to **or** and add **train_params["num_nodes"] = 1**, it also faced _ValueError: Failed to look up actor with name 'retrieval_worker_0'._
Overall, it's the same problem as them when running END2END-RAG.<|||||>Can you find out during the initialization, the code enters the if the condition or not?
Try to print something after this [line](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag-end2end-retriever/finetune_rag.py#L709). I want to know whether your code fails before the DDP process. If it doesn't go inside the if condition when starting, it is a problem with Ray; otherwise it is something with PL. Please let me know these things asap.<|||||>When I run the following:
```
print("debug: LOCAL RANK in if = {}".format("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0))
print("debug: NODE RANK in if = {}".format("NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0))
if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) and (
"NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0
):
print('Debug: 1 Go into successfully')
remote_cls = ray.remote(RayRetriever)
named_actors = [
remote_cls.options(name="retrieval_worker_{}".format(i)).remote()
for i in range(args.num_retrieval_workers)
]
print('Debug: 2 Initially successfully')
else:
print("Debug: 3 in else")
logger.info(
"Getting named actors for NODE_RANK {}, LOCAL_RANK {}".format(
os.environ["NODE_RANK"], os.environ["LOCAL_RANK"]
)
)
print('Debug: 4444444')
named_actors = [ray.get_actor("retrieval_worker_{}".format(i)) for i in range(args.num_retrieval_workers)]
print('Debug: 5555555')
```
It prints the following (I copied everything below after setting up the conda env):
```
2021-06-09 08:34:41,460 INFO worker.py:726 -- Connecting to existing Ray cluster at address: 10.36.112.5:6379
shzhang Debug NODE_RANK 0
shzhang Debug LOCAL_RANK Not exist.
debug: LOCAL RANK in if = True
debug: NODE RANK in if = False
Debug: 3 in else
Traceback (most recent call last):
File "finetune_rag.py", line 634, in <module>
main(args)
File "finetune_rag.py", line 561, in main
os.environ["NODE_RANK"], os.environ["LOCAL_RANK"]
File "/opt/miniconda/envs/rag/lib/python3.8/os.py", line 675, in __getitem__
raise KeyError(key) from None
KeyError: 'LOCAL_RANK'
Starting the daemon thread to refresh tokens in background for process with pid = 956
```
OK, I think the problem is with the if conditions. When you run the code, it never goes inside the if condition, which means your workers don't get initialized. The main issue is that you already have a NODE_RANK variable in os.environ. Do you get me? If not, I can explain.
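To make the failure mode concrete, here is a minimal sketch (illustrative only, not the actual training code) of why the else branch raises `KeyError: 'LOCAL_RANK'` when the cluster pre-sets NODE_RANK but not LOCAL_RANK:
```python
import os

# Simulate the cluster environment reported above: NODE_RANK is set, LOCAL_RANK is not.
os.environ.pop("LOCAL_RANK", None)
os.environ["NODE_RANK"] = "0"

if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) and (
    "NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0
):
    print("would initialize the Ray retrieval workers here")
else:
    # Mirrors the logging call in finetune_rag.py: LOCAL_RANK is missing, so this raises KeyError.
    # Note that os.environ values are strings, so os.environ["NODE_RANK"] == 0 is False even when NODE_RANK=0.
    print(
        "Getting named actors for NODE_RANK {}, LOCAL_RANK {}".format(
            os.environ["NODE_RANK"], os.environ["LOCAL_RANK"]
        )
    )
```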
<|||||>I can't see the email address because it shows as ***@***.***, so I will email you at the Gmail address on your LinkedIn.<|||||>Sure.
<|||||>Hey, did you manage to solve it?<|||||>There seems to be something surprising!
My solution:
1. Remove the condition after the **and** operator in the line below, keeping only the first condition (see the sketch below).
`if ("LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0) and ( "NODE_RANK" not in os.environ or os.environ["NODE_RANK"] == 0 )`
2. Pin the version of Ray to ray==1.3.0 rather than ray>=1.3.0 (which installs ray==1.4.0 on the cluster).
Then it works well, as shown below:
```
Validating: 100%|██████████| 2962/2964 [22:52<00:00, 2.22it/s]
Epoch 0: 0%| | 2964/64358290 [23:10<8388:32:39, 2.13it/s, loss=nan, v_num=6]
Validating: 100%|██████████| 2963/2964 [22:52<00:00, 2.22it/s]
Epoch 0: 0%| | 2965/64358290 [23:11<8388:26:04, 2.13it/s, loss=nan, v_num=6]
```
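For reference, after change 1 the check in `finetune_rag.py` looks roughly like the sketch below (illustrative; the names come from the snippet quoted earlier in this thread, and the surrounding code may differ):
```python
# Only the first group is kept: the process with no LOCAL_RANK set (or LOCAL_RANK == 0)
# creates the named Ray retrieval workers; DDP workers just look them up.
if "LOCAL_RANK" not in os.environ or os.environ["LOCAL_RANK"] == 0:
    remote_cls = ray.remote(RayRetriever)
    named_actors = [
        remote_cls.options(name="retrieval_worker_{}".format(i)).remote()
        for i in range(args.num_retrieval_workers)
    ]
else:
    named_actors = [
        ray.get_actor("retrieval_worker_{}".format(i))
        for i in range(args.num_retrieval_workers)
    ]
```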
I'm going to check it a few times to avoid mistakes.<|||||>Nice. Actually, the problem was that your cluster has a default node.
Now you are simply checking whether the DDP process has started or not. BTW, what happens when you use the latest Ray?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>[updated]
@shamanez
Sorry for sharing my reproduced results so late.
I got EM=40.31 with the end2end setup, just following the same settings as [rag-end2end-retriever](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag-end2end-retriever).
<|||||>That's nice to hear. Thanks for letting me know.
|
transformers | 12,049 | closed | fix past_key_values docs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12032
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-07-2021 06:16:19 | 06-07-2021 06:16:19 | |
transformers | 12,048 | closed | OpenAI GPT language modeling shape mismatch: 512 position embeddings, 1024 input embeddings | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: below log is for cpu; also fails with gpu but cpu gives better error
- Using distributed or parallel set-up in script?: NA for cpu
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- openai-gpt: @sgugger
## Information
Model I am using (Bert, XLNet ...): openai-gpt
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) causal language modelling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behaviour:
1. new environment, editable installation from source
2. CUDA_VISIBLE_DEVICES=, nice python transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --per_device_train_batch_size 2 --gradient_accumulation_steps 4
```Shell
06/07/2021 05:58:13 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False
06/07/2021 05:58:13 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=0,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=500,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=4,
greater_is_better=None,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_on_each_node=True,
logging_dir=runs/Jun07_05-58-13_fermi-debug,
logging_first_step=False,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
output_dir=/tmp/test-clm,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=2,
prediction_loss_only=False,
push_to_hub=False,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=/tmp/test-clm,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
)
06/07/2021 05:58:14 - WARNING - datasets.builder - Reusing dataset wikitext (/home/avit/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)
[INFO|configuration_utils.py:517] 2021-06-07 05:58:14,482 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /home/avit/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
[INFO|configuration_utils.py:553] 2021-06-07 05:58:14,483 >> Model config OpenAIGPTConfig {
"afn": "gelu",
"architectures": [
"OpenAIGPTLMHeadModel"
],
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "openai-gpt",
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 0,
"predict_special_tokens": true,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.7.0.dev0",
"vocab_size": 40478
}
[INFO|configuration_utils.py:517] 2021-06-07 05:58:14,766 >> loading configuration file https://huggingface.co/openai-gpt/resolve/main/config.json from cache at /home/avit/.cache/huggingface/transformers/bebb46f5735701bc248ef9faa26f12577944fa7fc8e9be1a774b94d4cb8b79b6.ba6f10a5446f364b92311c09e55e49aa27024a4aeefc1ea50fd733b77bcd997d
[INFO|configuration_utils.py:553] 2021-06-07 05:58:14,767 >> Model config OpenAIGPTConfig {
"afn": "gelu",
"architectures": [
"OpenAIGPTLMHeadModel"
],
"attn_pdrop": 0.1,
"embd_pdrop": 0.1,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "openai-gpt",
"n_ctx": 512,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 512,
"n_special": 0,
"predict_special_tokens": true,
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.7.0.dev0",
"vocab_size": 40478
}
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/vocab.json from cache at /home/avit/.cache/huggingface/transformers/918c57540c636a2a662770d208fcf20aa8c3faea78201fc612e5c84f052f1119.ac55819e76b0f8b0c32cbb407436947d090d98f8952f38376ee249ed382927ab
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/merges.txt from cache at /home/avit/.cache/huggingface/transformers/a682e219a788dde0e4f77bc5a470d85a4d7e493420506ce7e3266f7be122cf9e.2150b9689fda7ca7c6224ff32672c004259f974e96934e8eb69d8dd546d682db
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer.json from cache at /home/avit/.cache/huggingface/transformers/325373fcbb0daa99905371727842a87ae9ca0f02f71db071720bb4d5a59076cf.b1810f3c6ed9fc0632664008484a9b569103559c04ac90321723cd808a3a96f9
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1717] 2021-06-07 05:58:16,461 >> loading file https://huggingface.co/openai-gpt/resolve/main/tokenizer_config.json from cache at None
[INFO|modeling_utils.py:1155] 2021-06-07 05:58:16,805 >> loading weights file https://huggingface.co/openai-gpt/resolve/main/pytorch_model.bin from cache at /home/avit/.cache/huggingface/transformers/3e867ce638da986403594a5acbb39846ecb9c3b360a3b526348dd54b06938e55.93527980a112896731f93175b7c1cbc6b0fd771fad85fcc777ff5d49d249782e
[INFO|modeling_utils.py:1339] 2021-06-07 05:58:18,886 >> All model checkpoint weights were used when initializing OpenAIGPTLMHeadModel.
[WARNING|modeling_utils.py:1341] 2021-06-07 05:58:18,886 >> Some weights of OpenAIGPTLMHeadModel were not initialized from the model checkpoint at openai-gpt and are newly initialized: ['lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/5 [00:00<?, ?ba/s]
40%|ββββ | 2/5 [00:00<00:00, 18.62ba/s][WARNING|tokenization_utils_base.py:3170] 2021-06-07 05:58:19,096 >> Token indices sequence length is longer than the specified maximum sequence length for this model (535 > 512). Running this sequence through the model will result in indexing errors
[WARNING|run_clm.py:347] 2021-06-07 05:58:19,097 >> ^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model.
100%|ββββββββββ| 5/5 [00:00<00:00, 24.33ba/s]
0%| | 0/37 [00:00<?, ?ba/s]
8%|β | 3/37 [00:00<00:01, 22.90ba/s]
16%|ββ | 6/37 [00:00<00:01, 23.70ba/s]
22%|βββ | 8/37 [00:00<00:01, 20.28ba/s]
30%|βββ | 11/37 [00:00<00:01, 21.11ba/s]
38%|ββββ | 14/37 [00:00<00:01, 21.90ba/s]
46%|βββββ | 17/37 [00:00<00:00, 22.32ba/s]
54%|ββββββ | 20/37 [00:00<00:00, 23.04ba/s]
62%|βββββββ | 23/37 [00:01<00:00, 23.13ba/s]
70%|βββββββ | 26/37 [00:01<00:00, 21.79ba/s]
78%|ββββββββ | 29/37 [00:01<00:00, 22.03ba/s]
86%|βββββββββ | 32/37 [00:01<00:00, 22.01ba/s]
95%|ββββββββββ| 35/37 [00:01<00:00, 22.39ba/s]
100%|ββββββββββ| 37/37 [00:01<00:00, 22.54ba/s]
0%| | 0/4 [00:00<?, ?ba/s]
75%|ββββββββ | 3/4 [00:00<00:00, 22.82ba/s]
100%|ββββββββββ| 4/4 [00:00<00:00, 24.22ba/s]
0%| | 0/5 [00:00<?, ?ba/s]
20%|ββ | 1/5 [00:00<00:01, 2.53ba/s]
40%|ββββ | 2/5 [00:00<00:01, 2.66ba/s]
60%|ββββββ | 3/5 [00:01<00:00, 2.74ba/s]
80%|ββββββββ | 4/5 [00:01<00:00, 2.91ba/s]
100%|ββββββββββ| 5/5 [00:01<00:00, 3.54ba/s]
0%| | 0/37 [00:00<?, ?ba/s]
3%|β | 1/37 [00:00<00:10, 3.30ba/s]
5%|β | 2/37 [00:00<00:11, 3.11ba/s]
8%|β | 3/37 [00:01<00:11, 3.05ba/s]
11%|β | 4/37 [00:01<00:10, 3.04ba/s]
14%|ββ | 5/37 [00:01<00:09, 3.22ba/s]
16%|ββ | 6/37 [00:01<00:09, 3.28ba/s]
19%|ββ | 7/37 [00:02<00:09, 3.02ba/s]
22%|βββ | 8/37 [00:02<00:09, 3.06ba/s]
24%|βββ | 9/37 [00:02<00:09, 3.03ba/s]
27%|βββ | 10/37 [00:03<00:08, 3.05ba/s]
30%|βββ | 11/37 [00:03<00:08, 3.01ba/s]
32%|ββββ | 12/37 [00:03<00:08, 2.97ba/s]
35%|ββββ | 13/37 [00:04<00:08, 2.91ba/s]
38%|ββββ | 14/37 [00:04<00:07, 3.04ba/s]
41%|ββββ | 15/37 [00:04<00:07, 3.05ba/s]
43%|βββββ | 16/37 [00:05<00:07, 2.97ba/s]
46%|βββββ | 17/37 [00:05<00:06, 2.95ba/s]
49%|βββββ | 18/37 [00:05<00:06, 3.00ba/s]
51%|ββββββ | 19/37 [00:06<00:05, 3.01ba/s]
54%|ββββββ | 20/37 [00:06<00:05, 3.09ba/s]
57%|ββββββ | 21/37 [00:06<00:05, 2.98ba/s]
59%|ββββββ | 22/37 [00:07<00:05, 2.89ba/s]
62%|βββββββ | 23/37 [00:07<00:04, 2.97ba/s]
65%|βββββββ | 24/37 [00:07<00:04, 3.11ba/s]
68%|βββββββ | 25/37 [00:08<00:03, 3.23ba/s]
70%|βββββββ | 26/37 [00:08<00:03, 3.21ba/s]
73%|ββββββββ | 27/37 [00:08<00:03, 3.04ba/s]
76%|ββββββββ | 28/37 [00:09<00:03, 2.91ba/s]
78%|ββββββββ | 29/37 [00:09<00:02, 3.10ba/s]
81%|ββββββββ | 30/37 [00:09<00:02, 3.07ba/s]
84%|βββββββββ | 31/37 [00:10<00:02, 2.93ba/s]
86%|βββββββββ | 32/37 [00:10<00:01, 2.96ba/s]
89%|βββββββββ | 33/37 [00:10<00:01, 2.93ba/s]
92%|ββββββββββ| 34/37 [00:11<00:01, 2.90ba/s]
95%|ββββββββββ| 35/37 [00:11<00:00, 2.98ba/s]
97%|ββββββββββ| 36/37 [00:11<00:00, 2.92ba/s]
100%|ββββββββββ| 37/37 [00:12<00:00, 3.44ba/s]
100%|ββββββββββ| 37/37 [00:12<00:00, 3.05ba/s]
0%| | 0/4 [00:00<?, ?ba/s]
25%|βββ | 1/4 [00:00<00:00, 3.37ba/s]
50%|βββββ | 2/4 [00:00<00:00, 3.17ba/s]
75%|ββββββββ | 3/4 [00:01<00:00, 3.06ba/s]
100%|ββββββββββ| 4/4 [00:01<00:00, 3.47ba/s]
100%|ββββββββββ| 4/4 [00:01<00:00, 3.31ba/s]
[INFO|trainer.py:1147] 2021-06-07 05:58:35,755 >> ***** Running training *****
[INFO|trainer.py:1148] 2021-06-07 05:58:35,755 >> Num examples = 2282
[INFO|trainer.py:1149] 2021-06-07 05:58:35,755 >> Num Epochs = 3
[INFO|trainer.py:1150] 2021-06-07 05:58:35,755 >> Instantaneous batch size per device = 2
[INFO|trainer.py:1151] 2021-06-07 05:58:35,755 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:1152] 2021-06-07 05:58:35,755 >> Gradient Accumulation steps = 4
[INFO|trainer.py:1153] 2021-06-07 05:58:35,756 >> Total optimization steps = 855
0%| | 0/855 [00:00<?, ?it/s]Traceback (most recent call last):
File "transformers/examples/pytorch/language-modeling/run_clm.py", line 488, in <module>
main()
File "transformers/examples/pytorch/language-modeling/run_clm.py", line 438, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1263, in train
tr_loss += self.training_step(model, inputs)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1741, in training_step
loss = self.compute_loss(model, inputs)
File "/home/avit/trial2/transformers/src/transformers/trainer.py", line 1773, in compute_loss
outputs = model(**inputs)
File "/home/avit/miniconda3/envs/try2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/avit/trial2/transformers/src/transformers/models/openai/modeling_openai.py", line 581, in forward
transformer_outputs = self.transformer(
File "/home/avit/miniconda3/envs/try2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/avit/trial2/transformers/src/transformers/models/openai/modeling_openai.py", line 501, in forward
hidden_states = inputs_embeds + position_embeds + token_type_embeds
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1
0%| | 0/855 [00:00<?, ?it/s]
```
## Expected behaviour
Should not have a mismatch in tensor shapes. Apparently, the maximum token lengths do not match: position embeddings expect 512 but input embeddings are 1024. | 06-07-2021 06:15:05 | 06-07-2021 06:15:05 | Note that `openai-gpt` has a max_length of 512. See under `n_positions` in the config here: https://huggingface.co/openai-gpt/blob/main/config.json.
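For reference, the same limit can also be read directly from the model config (a quick illustrative check):
```python
from transformers import OpenAIGPTConfig

config = OpenAIGPTConfig.from_pretrained("openai-gpt")
print(config.n_positions)  # 512
```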
The `run_clm.py` script however sets `max_length` to 1024 by default. To fix your bug you should run:
```bash
python transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm --per_device_train_batch_size 2 --gradient_accumulation_steps 4 --block_size 512
```<|||||>Actually, it's weird that you get this error since:
```python
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
tokenizer.model_max_length # prints 512
```
=> so the block size should have automatically been correctly set <|||||>There is a small bug with a line not properly indented, fixing. |
transformers | 12,047 | closed | Question: Masked Loss for LukeForEntitySpanClassification | In transformers.LukeForEntitySpanClassification, the loss is calculated from labels of shape `(batch_size, entity_length)` and in range `[0, ..., config.num_labels - 1]`. I did not see in the source code you masked out the loss for padded tokens, nor did you make a special label for padded tokens. So how do you deal with the loss of padded sequences?
Many thanks! | 06-07-2021 03:14:31 | 06-07-2021 03:14:31 | Normally, in PyTorch, you have to set labels for padding tokens equal to -100, as -100 is the default ignore index that loss functions use.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,046 | closed | ImportError: cannot import name 'AutoTokenizer' from 'transformers' | transformers: 4.6.1
tokenizers: 0.10.3
I installed transformers with
`conda install -c huggingface transformers`
but when I run `from transformers import AutoTokenizer`, I get:
Traceback (most recent call last):
File "D:/IIE/WorkSpace/Pycharm WorkSpace/HuggingfaceNER/tokenizers.py", line 1, in <module>
from transformers import AutoTokenizer
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\__init__.py", line 48, in <module>
from .data import (
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\__init__.py", line 6, in <module>
from .processors import (
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\processors\__init__.py", line 5, in <module>
from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\data\processors\glue.py", line 25, in <module>
from ...tokenization_utils import PreTrainedTokenizer
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\tokenization_utils_base.py", line 31, in <module>
from tokenizers import AddedToken
File "D:\IIE\WorkSpace\Pycharm WorkSpace\HuggingfaceNER\tokenizers.py", line 1, in <module>
from transformers import AutoTokenizer
ImportError: cannot import name 'AutoTokenizer' from 'transformers' (D:\Environment\Anaconda3\envs\huggingface\lib\site-packages\transformers\__init__.py)
It even worked well yesterday, and I didn't upgrade anything...
| 06-07-2021 02:07:49 | 06-07-2021 02:07:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,045 | closed | [wip] [deps] data_collator fails with older numpy, update numpy>=1.20.0 | Please ignore for now as this looks like a pytorch 1.9.0-rc problem; I filed an issue:
https://github.com/pytorch/pytorch/issues/59533
--------------------
with pytorch-1.9.0/nightly 1.9.0a0+git2a178d3
data collator via Trainer fails with numpy==1.19.5 with:
```
RuntimeError: Could not infer dtype of numpy.float32
```
Additionally getting a warning:
```
../../../../../home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/package/_mock_zipreader.py:17
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/package/_mock_zipreader.py:17: UserWarning: Failed to initialize NumPy: module compiled against API version 0xe but this version of numpy is 0xd (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:67.)
_dtype_to_storage = {data_type(0).dtype: data_type for data_type in _storages}
```
Full trace:
```
$ pip install numpy==1.19.5
$ pytest tests/test_trainer.py::TrainerIntegrationTest::test_fp16_full_eval
====================================================================== test session starts ======================================================================
platform linux -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface, configfile: pytest.ini
plugins: monitor-1.6.0, flakefinder-1.0.0, forked-1.3.0, instafail-0.4.2, xdist-2.2.1
collected 1 item
tests/test_trainer.py F [100%]
=========================================================================== FAILURES ============================================================================
__________________________________________________________ TrainerIntegrationTest.test_fp16_full_eval ___________________________________________________________
self = <tests.test_trainer.TrainerIntegrationTest testMethod=test_fp16_full_eval>
def setUp(self):
super().setUp()
args = TrainingArguments(".")
self.n_epochs = args.num_train_epochs
self.batch_size = args.train_batch_size
trainer = get_regression_trainer(learning_rate=0.1)
> trainer.train()
tests/test_trainer.py:333:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/trainer.py:1237: in train
for step, inputs in enumerate(epoch_iterator):
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/dataloader.py:521: in __next__
data = self._next_data()
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/dataloader.py:561: in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
/home/stas/anaconda3/envs/py38-pt18/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:47: in fetch
return self.collate_fn(data)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
features = [{'input_x': -0.54438275, 'labels': 1.8582585}, {'input_x': 0.64768857, 'labels': 4.288176}, {'input_x': 1.5792128, 'l...abels': 0.12356561}, {'input_x': -0.46947438, 'labels': 2.0574687}, {'input_x': -0.46572974, 'labels': 2.1507308}, ...]
def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:
"""
Very simple data collator that simply collates batches of dict-like objects and performs special handling for
potential keys named:
- ``label``: handles a single value (int or float) per object
- ``label_ids``: handles a list of values per object
Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs
to the model. See glue and ner for example of how it's useful.
"""
# In this function we'll make the assumption that all `features` in the batch
# have the same attributes.
# So we will look at the first element as a proxy for what attributes exist
# on the whole batch.
if not isinstance(features[0], (dict, BatchEncoding)):
features = [vars(f) for f in features]
first = features[0]
batch = {}
# Special handling for labels.
# Ensure that tensor is created with the correct type
# (it should be automatically the case, but let's make sure of it.)
if "label" in first and first["label"] is not None:
label = first["label"].item() if isinstance(first["label"], torch.Tensor) else first["label"]
dtype = torch.long if isinstance(label, int) else torch.float
batch["labels"] = torch.tensor([f["label"] for f in features], dtype=dtype)
elif "label_ids" in first and first["label_ids"] is not None:
if isinstance(first["label_ids"], torch.Tensor):
batch["labels"] = torch.stack([f["label_ids"] for f in features])
else:
dtype = torch.long if type(first["label_ids"][0]) is int else torch.float
batch["labels"] = torch.tensor([f["label_ids"] for f in features], dtype=dtype)
# Handling of all other possible keys.
# Again, we will use the first element to figure out which key/values are not None for this model.
for k, v in first.items():
if k not in ("label", "label_ids") and v is not None and not isinstance(v, str):
if isinstance(v, torch.Tensor):
batch[k] = torch.stack([f[k] for f in features])
else:
> batch[k] = torch.tensor([f[k] for f in features])
E RuntimeError: Could not infer dtype of numpy.float32
src/transformers/data/data_collator.py:80: RuntimeError
```
The error goes away after installing the next release `numpy==1.20.0`.
Perhaps it can be fixed in the collator to support older numpy.
This PR is one way to approach it. Not sure if we have other dependencies that perhaps require numpy<=1.20.
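For illustration, one collator-side workaround (a sketch of the general idea only, not necessarily what this PR implements) would be to let numpy build the array before handing it to torch, so torch never has to infer the dtype of individual `numpy.float32` scalars:
```python
import numpy as np
import torch

# Hypothetical stand-in for the failing line in default_data_collator:
#     batch[k] = torch.tensor([f[k] for f in features])
def collate_column(values):
    # np.array resolves the dtype (e.g. float32) up front, which older
    # numpy/torch combinations handle fine.
    return torch.tensor(np.array(values))
```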
| 06-06-2021 23:39:04 | 06-06-2021 23:39:04 | OK, so as I was concerned some dependency fixes numpy at 1.19.5 so things fail:
```
ERROR: Could not find a version that satisfies the requirement numpy>=1.20.0 (from transformers[all,quality]) (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5)
ERROR: No matching distribution found for numpy>=1.20.0
Exited with code exit status 1
```
This is so not user-friendly as it could have said which dependency causes this conflict.
`pip check` comes to help:
```
$ pip check
WARNING: Ignoring invalid distribution -orch (/mnt/nvme1/anaconda3/envs/py38-pt18/lib/python3.8/site-packages)
tensorflow 2.5.0 has requirement numpy~=1.19.2, but you have numpy 1.20.0.
```
so `tensorflow 2.5.0` is the limiting culprit.<|||||>OK, that was a faulty pytorch build. Seems to work fine with the latest nightly or 1.9.0-rc. |
transformers | 12,044 | closed | Electra model vocabulary | 1. The Electra model vocabulary doesn't show the vocabulary words, unlike other models where the vocabulary words can be clearly seen.
2. In this link (https://huggingface.co/google/electra-base-discriminator/resolve/main/vocab.txt) strangely all words are [unused0] barring [PAD], [CLS] and few other special tokens.
3. How can I see the vocabulary words for the Electra tokenizer? | 06-06-2021 16:50:10 | 06-06-2021 16:50:10 | 
transformers | 12,043 | closed | [Draft] Wav2Vec2 - Save intermediate PR verifying that implementation matches fairseq ones | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Using this branch of fairseq: https://github.com/patrickvonplaten/fairseq/pull/1
running this code should work as expected:
```python
#!/usr/bin/env python3
import datasets
import fairseq
import torch
import soundfile as sf
import sys
from fairseq.criterions.wav2vec_criterion import Wav2VecCriterionConfig, Wav2vecCriterion
from fairseq.tasks.audio_pretraining import AudioPretrainingConfig, AudioPretrainingTask
from transformers import Wav2Vec2ForPreTraining, Wav2Vec2FeatureExtractor
hf_path = str(sys.argv[1])
fairseq_wav2vec2_path = str(sys.argv[2])
model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fairseq_wav2vec2_path])
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(hf_path, do_normalize=False)
hf_model = Wav2Vec2ForPreTraining.from_pretrained(hf_path)
model = model[0]
model.eval()
dummy_speech_data = datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
def map_to_array(batch):
speech_array, _ = sf.read(batch["file"])
batch["speech"] = speech_array
return batch
dummy_speech_data = dummy_speech_data.map(map_to_array, remove_columns=["file"])
inputs = feature_extractor(dummy_speech_data[:3]["speech"], return_tensors="pt", padding="longest", return_attention_mask=True)
input_values = inputs.input_values
attention_mask = inputs.attention_mask
audio_cfg = AudioPretrainingConfig(labels="ltr", data="./data")
task = AudioPretrainingTask.setup_task(audio_cfg)
criterion = Wav2vecCriterion(Wav2VecCriterionConfig(infonce=True, log_keys=["prob_perplexity", "code_perplexity", "temp"], loss_weights=[0.1, 10]), task)
sample = {
"net_input": {
"source": input_values,
"padding_mask": attention_mask.ne(1),
},
"id": torch.zeros((1,)),
}
torch.manual_seed(0)
loss, sample_size, log, result = criterion(model, sample)
torch.manual_seed(0)
hf_result = hf_model(input_values, attention_mask=attention_mask, mask_time_indices=result["mask_indices"], fsq_negs=result["negs"])
hf_logits = hf_result.logits.permute(1, 2, 0)[result["mask_indices"]]
hf_logits = hf_logits.reshape(result['x'].shape[1:] + (-1,)).permute(2, 0, 1)
assert torch.allclose(hf_logits, result['x'], atol=1e-3), "wrong logits"
print("Loss diff %", 100 * (loss.detach().item() - hf_result.loss.detach().item()) / hf_result.loss.detach())
print("perplexity diff %", 100 * (hf_result.prob_perplexity.detach().item() -result["prob_perplexity"].detach().item()) / hf_result.prob_perplexity.detach())
```
and using [this](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) as the fairseq checkpoint and [this](https://huggingface.co/patrickvonplaten/wav2vec2-base) model as the HF model.
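For example, assuming the script above is saved locally as `check_wav2vec2_pretraining.py` (a made-up file name) and the two checkpoints above are downloaded, the invocation would look like:
```bash
# HF model ID/path first, fairseq checkpoint second (sys.argv[1] and sys.argv[2] in the script above)
python check_wav2vec2_pretraining.py patrickvonplaten/wav2vec2-base ./wav2vec_small.pt
```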
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-06-2021 14:52:22 | 06-06-2021 14:52:22 | Delete after successful run of Wav2Vec2 PreTraining<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,042 | closed | Add optional grouped parsers description to HfArgumentParser | # What does this PR do?
This PR adds optional grouping to the argument parser of `HfArgumentParser` when multiple dataclasses are used (with different sub-grouped parameters, such as optimizer setup, model config, etc.) so that the displayed `-h` will print multiple grouped arguments in a more semantically organized way.
Uses an optional attribute `_argument_group_name=<some string>` in the dataclass. If it exists in the dataclass, an argument group (sub-parser) is used instead of the root `HfArgumentParser`.
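A rough sketch of what a grouped setup might look like under this PR (names and help strings are illustrative, and the exact attribute handling follows this PR rather than the released library):
```python
from dataclasses import dataclass, field

from transformers import HfArgumentParser


@dataclass
class OptimizerArguments:
    _argument_group_name = "Optimizer arguments"  # picked up by the grouped parser

    learning_rate: float = field(default=5e-5, metadata={"help": "Initial learning rate."})
    weight_decay: float = field(default=0.0, metadata={"help": "Weight decay."})


@dataclass
class ModelArguments:
    _argument_group_name = "Model arguments"

    model_name_or_path: str = field(default="bert-base-uncased", metadata={"help": "Checkpoint to load."})


parser = HfArgumentParser((OptimizerArguments, ModelArguments))
optim_args, model_args = parser.parse_args_into_dataclasses()
# `python script.py -h` would then list the arguments under their own group headings.
```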
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
@peteriz: Updated docstring inline
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-06-2021 14:00:47 | 06-06-2021 14:00:47 | Thanks! |
transformers | 12,041 | closed | Why can't my simple BERT model for text classification learn anything? | Hello, I tried `transformers.BertModel` for a simple text classification task, but the result puzzles me.
The code is simple; I implemented the model with PyTorch.
Here it is:
```
# a Dataset class for BertModel
class BertDataset(Dataset):
def __init__(self, train_file, tokenizer):
super(BertDataset, self).__init__()
self.train_file = train_file
self.data = []
self.label2id = {}
self.id2label = {}
self.tokenizer = tokenizer
self.init()
def init(self):
with open(self.train_file, 'r', encoding='utf-8') as f:
for line in f:
blocks = line.strip().split('\t')
if blocks[1] not in self.label2id:
self.label2id[blocks[1]] = len(self.label2id)
self.id2label[len(self.id2label)] = blocks[1]
self.data.append({'token': self.tokenizer(blocks[0], add_special_tokens=True, max_length=100,
padding='max_length', return_tensors='pt',
truncation=True),
'label': self.label2id[blocks[1]]})
def __getitem__(self, item):
return self.data[item]
def __len__(self):
return len(self.data)
# a collate function for torch.utils.data.DataLoader
def bert_collate_fn(batch_data):
input_ids, token_type_ids, attention_mask, labels = [], [], [], []
for instance in copy.deepcopy(batch_data):
input_ids.append(instance['token']['input_ids'][0].squeeze(0))
token_type_ids.append(instance['token']['token_type_ids'][0].squeeze(0))
attention_mask.append(instance['token']['attention_mask'][0].squeeze(0))
labels.append(instance['label'])
return torch.stack(input_ids), torch.stack(token_type_ids), \
torch.stack(attention_mask), torch.tensor(labels)
# Model
class PTModel(nn.Module):
def __init__(self, model, n_class):
super(PTModel, self).__init__()
self.n_class = n_class
self.model = model
self.linear = nn.Linear(768, self.n_class)
self.softmax = nn.Softmax(dim=-1)
def forward(self, input_ids, token_type_ids=None, attention_mask=None):
cls_emb = self.model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
cls_emb = cls_emb[0][:, 0, :].squeeze(1)
logits = self.linear(cls_emb)
# logits = self.softmax(logits)
return logits
# train code
def train1():
# data
batch_size = 16
tokenizer = BertTokenizer.from_pretrained(pretrained_path)
dataset = BertDataset('../data/dataset/data.txt', tokenizer)
train_len = int(len(dataset)*0.8)
train_dataset, dev_dataset = random_split(dataset=dataset, lengths=[train_len, len(dataset)-train_len])
train_dataloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn)
dev_dataloader = DataLoader(dev_dataset, batch_size=batch_size, shuffle=True, collate_fn=bert_collate_fn)
# model
device = torch.device('cuda:{}'.format(args.cuda))
bert_model = BertModel.from_pretrained(pretrained_path)
model = PTModel(model=bert_model, n_class=len(dataset.label2id)).to(device)
optimizer = torch.optim.Adam(params=model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[30, 40], gamma=0.1)
loss_func = torch.nn.CrossEntropyLoss()
# train
for i in range(args.epoch):
model.train()
train_loss, dev_loss, f1_train, f1_dev = [], [], [], []
dev_pred_list, dev_gold_list = [], []
for input_ids, token_type_ids, attention_mask, label in tqdm(train_dataloader):
input_ids, token_type_ids, attention_mask, label = input_ids.to(device), token_type_ids.to(device), \
attention_mask.to(device), label.to(device),
outputs = model(input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask)
array_outputs = np.array(outputs.cuda().data.cpu())
optimizer.zero_grad()
loss = loss_func(outputs, label)
results = outputs.cuda().data.cpu().argmax(dim=1)
score = f1_score(label.cuda().data.cpu(), results, average='micro')
train_loss.append(loss.item())
f1_train.append(score)
# optim
loss.backward()
optimizer.step()
scheduler.step()
print('epoch {}'.format(i))
print('train_loss:{}'.format(np.mean(train_loss)))
print('train_f1:{}'.format(np.mean(f1_train)))
```
The train log is following(only 10 epoches). And the result was already clear: The model could not learn anything!!!!
PS: the learning rate was 1e-3.
```
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:43<00:00, 5.72it/s]
epoch 0
train_loss:4.217772917747498
train_f1:0.081
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.52it/s]
dev_f1:0.08928571428571429
dev_loss:4.111690880760314
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:43<00:00, 5.71it/s]
epoch 1
train_loss:4.094675525665283
train_f1:0.084
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.16it/s]
dev_f1:0.0882936507936508
dev_loss:4.1316274839734275
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:43<00:00, 5.71it/s]
epoch 2
train_loss:4.084259546279907
train_f1:0.08525
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.37it/s]
dev_f1:0.08928571428571429
dev_loss:4.108004717599778
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:44<00:00, 5.62it/s]
epoch 3
train_loss:4.0770455904006955
train_f1:0.09425
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.07it/s]
dev_f1:0.08928571428571429
dev_loss:4.1077501395392035
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:45<00:00, 5.54it/s]
epoch 4
train_loss:4.070150758743286
train_f1:0.086
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.41it/s]
dev_f1:0.09027777777777778
dev_loss:4.103204295748756
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:45<00:00, 5.52it/s]
epoch 5
train_loss:4.064209712982178
train_f1:0.0895
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.31it/s]
dev_f1:0.08928571428571429
dev_loss:4.117827377622089
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:43<00:00, 5.70it/s]
epoch 6
train_loss:4.065111406326294
train_f1:0.08425
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.34it/s]
dev_f1:0.0882936507936508
dev_loss:4.099656305615864
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:44<00:00, 5.58it/s]
epoch 7
train_loss:4.0547873935699466
train_f1:0.09175
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.30it/s]
dev_f1:0.08928571428571429
dev_loss:4.105985126798115
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:43<00:00, 5.76it/s]
epoch 8
train_loss:4.0595885887145995
train_f1:0.08875
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 19.26it/s]
dev_f1:0.09027777777777778
dev_loss:4.121003010916332
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:45<00:00, 5.46it/s]
epoch 9
train_loss:4.054850312232971
train_f1:0.08825
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 18.86it/s]
dev_f1:0.08928571428571429
dev_loss:4.12501887669639
100%|βββββββββββββββββββββββββββββββββββββββββ| 250/250 [00:45<00:00, 5.46it/s]
epoch 10
train_loss:4.0566882238388065
train_f1:0.08525
100%|βββββββββββββββββββββββββββββββββββββββββββ| 63/63 [00:03<00:00, 18.85it/s]
dev_f1:0.09126984126984126
dev_loss:4.103033436669244
```
Before this BertModel, I have tried LSTM, and the LSTM worked well. the dev f1 reached 0.96.
```
# LSTM
class SimpleModel(nn.Module):
def __init__(self, **kwargs):
super(SimpleModel, self).__init__()
self.embedding = nn.Embedding.from_pretrained(kwargs['pretrained_embedding'], freeze=False)
self.lstm = nn.LSTM(kwargs['pretrained_embedding'].shape[1],
kwargs['hidden_size'],
batch_first=True,
bidirectional=True)
self.linear = nn.Linear(kwargs['hidden_size']*2, kwargs['n_class'])
def forward(self, inputs, lens):
inputs = self.embedding(inputs)
_, (h, _) = self.lstm(pack_padded_sequence(inputs, lens, batch_first=True, enforce_sorted=False))
h = h.permute(1, 0, 2).contiguous().view(h.shape[1], -1)
logits = self.linear(h)
logits = logits.softmax(dim=-1)
return logits
```
Could anyone tell me why this code doesn't work?
Is there something wrong with my implementation?
I have been confused for days.
Thank you very much!
| 06-06-2021 08:27:24 | 06-06-2021 08:27:24 | Hi there!
Please use the forum https://discuss.huggingface.co/ to ask such questions :) We use issues to report bugs and for feature requests.<|||||>> Hi there!
>
> Please use the forum https://discuss.huggingface.co/ to ask such questions :) We use issues to report bugs and for feature requests.
I am sorry<|||||>I found the key to this problem: I only needed to change the learning rate to 1e-5.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,040 | closed | Add torch to requirements.txt in language-modeling | it seems the requirements.txt was missing `torch`, which is seemingly required. Just adding it.
@sgugger | 06-05-2021 17:29:16 | 06-05-2021 17:29:16 | Thanks again! |
transformers | 12,039 | closed | pipelines should allow passing in tokenizer arguments | # 🚀 Feature request
It should be possible to pass in additional arguments for the tokenizer in the pipeline constructor.
Something like this:
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis', padding=True, truncation=True, max_length=512, device=0)
```
## Motivation
For example for a sentiment-analysis pipeline, if the model has a maximum number of tokens and you pass-in larger text than that to the pipeline it will make the pipeline crash. It would be really nice to be able to provide additional arguments for the tokenizer like padding=True, truncation=True, max_length=512 for example. The only workaround I found was to create the tokenizer and model separately and provide the arguments to the tokenizer directly.
Here is how I do it right now:
```
pt_batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
for x in pt_batch.keys():
pt_batch[x] = pt_batch[x].to('cuda')
pt_outputs = model(**pt_batch)
```
and how I would prefer to be able to do it instead:
```
from transformers import pipeline
classifier = pipeline('sentiment-analysis', padding=True, truncation=True, max_length=512, device=0)
preds = classifier(texts)
``` | 06-05-2021 15:01:13 | 06-05-2021 15:01:13 | Hi,
you can pass a 2-tuple `(tokenizer_name, tokenizer_kwargs)` to achieve this. E.g.:
```python
from transformers import pipeline
classifier = pipeline('sentiment-analysis', tokenizer=(tokenizer_name, {"padding": True, "truncation": True, "max_length": 512}), device=0)
```
@patrickvonplaten Any particular reason why this is not documented? <|||||>@mariosasko thanks for your response! I tried this, but unfortunately I get the same error.
```
from transformers import pipeline
model_name = 'distilbert-base-uncased-finetuned-sst-2-english'
classifier = pipeline('sentiment-analysis', model=model_name, tokenizer=(model_name, {"padding": True, "truncation": True, "max_length": 512}), device=0)
```
The error:
> Token indices sequence length is longer than the specified maximum sequence length for this model (1055 > 512). Running this sequence through the model will result in indexing errors
Thanks,<|||||>My bad. Just checked the source. This should work:
```python
from transformers import pipeline
classifier = pipeline('sentiment-analysis', device=0)
classifier(texts, padding=True, truncation=True, max_length=512)
```<|||||>This works! I was sure I had tried that, but it seems not.
Thank you! |
transformers | 12,038 | closed | Support for pointer-generator architectures. | # 🚀 Feature request
Is there interest in adding pointer-generator architecture support to huggingface? These are currently supported in [fairseq](https://github.com/pytorch/fairseq/blob/master/examples/pointer_generator/README.md), and in general should not be terrible to add for most encoder-decoder seq2seq tasks and models.
## Motivation
Pointer-generator architectures generally give SOTA results for extractive summarization, as well as for semantic parsing (see for instance [this paper](https://arxiv.org/pdf/2001.11458.pdf)).
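For context, a pointer-generator decoder (roughly in the spirit of See et al., 2017) mixes generating from the vocabulary with copying source tokens via attention: P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum_{i : x_i = w} a_i, where p_gen in [0, 1] is a learned switch, P_vocab is the decoder's softmax over the vocabulary, and a_i are the attention weights over the source tokens x_i.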
## Your contribution
If there is interest but not bandwidth from huggingface members, I could try to add pointer-generator support for a specific architecture such as the T5 and see how hard it would be to port over fairseq's implementation for instance.
Note: apologies also if I've missed where huggingface supports it already.
| 06-05-2021 13:30:06 | 06-05-2021 13:30:06 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,037 | closed | Using the latest DPR checkpoint available with HuggingFace DPR class | Hi,
There's a new DPR checkpoint available in the [DPR original repository](https://github.com/facebookresearch/DPR#new-march-2021-retrieval-model), which shows nice improvements. I converted the DPR checkpoint using convert_dpr.py (with a minor modification) and it is working fine.
I have one question regarding the correct tokenizer that should be used for the question_encoder and context_encoder. Since I can load the tokenizer from the following paths (AutoTokenizer.from_pretrained), I assumed both of these tokenizers behave in the same way (PreTrainedTokenizerFast).
1. **facebook/dpr-question_encoder-multiset-base**
2. **facebook/dpr-question_encoder-single-nq-base**
So, there won't be any problem if I use a tokenizer loaded from any of the above paths with the new checkpoint right (since DPR uses HuggingFace Tokenizers)?
@lhoestq
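A quick sanity check (not from the original report) is to load both tokenizers and compare them directly:
```python
from transformers import AutoTokenizer

tok_multiset = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
tok_nq = AutoTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

# both are fast BERT-style WordPiece tokenizers; comparing vocabularies is a quick check
print(type(tok_multiset).__name__, type(tok_nq).__name__)
print(tok_multiset.get_vocab() == tok_nq.get_vocab())
```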
| 06-05-2021 13:01:49 | 06-05-2021 13:01:49 | Hi! Yes, you should be fine. Under the hood it's actually the same tokenizer as `bert-base-uncased`.
Also it would be nice to add the new DPR checkpoints on the Hub as well.
What changes did you have to do in the convert_dpr.py file ?<|||||>Perfect and thanks @lhoestq
Actually, it was a very minor change. With the current transformer version, it gives an error [in this line](https://github.com/huggingface/transformers/blob/finalize_rag/src/transformers/convert_dpr_original_checkpoint_to_pytorch.py#L71), saying no positional_id key in the state_dict. It simply worked with setting the **strict= False**.
But for clarity I changed as follows (check newly added line :)):
```
class DPRQuestionEncoderState(DPRState):
def load_dpr_model(self):
model = DPRQuestionEncoder(DPRConfig(**BertConfig.get_config_dict("bert-base-uncased")[0]))
print("Loading DPR biencoder from {}".format(self.src_file))
saved_state = load_states_from_checkpoint(self.src_file)
encoder, prefix = model.question_encoder, "question_model."
model_state_dict = encoder.state_dict()
state_dict = {}
for key, value in saved_state.model_dict.items():
if key.startswith(prefix):
key = key[len(prefix) :]
if not key.startswith("encode_proj."):
key = "bert_model." + key
state_dict[key] = value
#newly added
for k , v in model_state_dict.items():
if not k in state_dict:
print("warnning can't find key:",k)
state_dict[k]=v
#encoder.save_pretrained(save_directory='./', state_dict=state_dict) #no need to modify
encoder.load_state_dict(state_dict)
return model
```
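(For reference, the quicker alternative mentioned above, simply relaxing the strict check, is a one-liner:)
```python
# alternative to copying the missing keys by hand: ignore keys that are absent
# from the converted state dict (e.g. position_ids buffers)
encoder.load_state_dict(state_dict, strict=False)
```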
|
transformers | 12,036 | closed | I cannot import deepspeed | ```
from transformers.file_utils import CONFIG_NAME
from transformers.deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
```
Why am I getting this error? file_utils and deepspeed are in the same directory, yet I can import the first one but not the second, which I don't understand.
> ---------------------------------------------------------------------------
> ModuleNotFoundError Traceback (most recent call last)
> File /data/home/admin/.conda/envs/cmr_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py, in run_code:
> Line 3437: exec(code_obj, self.user_global_ns, self.user_ns)
>
> In [12]:
> Line 1: from transformers.deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
>
> ModuleNotFoundError: No module named 'transformers.deepspeed'
> --------------------------------------------------------------------------- | 06-05-2021 10:11:54 | 06-05-2021 10:11:54 | Sorry, my bad. I had not noticed that there were changes in transformers and that I should upgrade the installation. |
transformers | 12,035 | closed | Fixed Typo in modeling_bart.py | # What does this PR do?
Fixes #11895
Fixed Typo `(seq_len, batch, embed_dim)` to `(batch, seq_len, embed_dim)` in line
[373](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L373) and [376](https://github.com/huggingface/transformers/blob/996a315e76f6c972c854990e6114226a91bc0a90/src/transformers/models/bart/modeling_bart.py#L376) as discussed [here](https://github.com/huggingface/transformers/issues/11895).
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-05-2021 09:25:32 | 06-05-2021 09:25:32 | Thanks a lot for fixing this!
Could you run `make fix-copies` and then push again? |
transformers | 12,034 | closed | After loading fine-tuned model from local and use it for prediction, it continue training from scratch again! | transformers: 4.6.1
torch: 1.3
gpu: k40m * 2
datasets: msra_ner
model: hfl/chinese-bert-wwm
I'm fine-tuning a model for token classification, and after training, I save the model:
```
trainer.save_model('./model')
trainer.save_metrics('./model')
```
Now I load the saved model:
```
tokenizer = AutoTokenizer.from_pretrained("./model")
config = transformers.AutoConfig.from_pretrained("./model")
model = AutoModelForTokenClassification.from_pretrained("./model", config=config)
args = TrainingArguments(
    output_dir='./results'
)
trainer = Trainer(
    model,
    args,
    data_collator=data_collator,
    tokenizer=tokenizer,
)
```
And then predict with the test dataset:
```
predictions, labels, metrics = trainer.predict(tokenized_datasets)
```
But in terminal, I get:
Some weights of the model checkpoint at hfl/chinese-bert-wwm were not used when initializing BertForTokenClassification: ['cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForTokenClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
0%| | 0/28130 [00:00<?, ?it/s]
It seems like the model starts training again?
@sgugger
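(A quick check, not part of the original report: inspecting the saved config shows which architecture the `./model` directory actually contains.)
```python
from transformers import AutoConfig

# if the directory really holds the fine-tuned model, this should list
# 'BertForTokenClassification' rather than a pretraining architecture
print(AutoConfig.from_pretrained("./model").architectures)
```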
| 06-05-2021 03:09:54 | 06-05-2021 03:09:54 | No, this means you are loading a model that hasn't been fine-tuned. The model present in in your `./model` directory seems to have a pretraining head which is discarded, and a new token classification head is instantiated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,033 | closed | A bug of modeling_wav2vec2.py:1033 line | transformers/models/wav2vec2/modeling_wav2vec2.py
Currently the example reads:
```python
>>> import torch
>>> from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf
>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
>>> def map_to_array(batch):
>>> speech, _ = sf.read(batch["file"])
>>> batch["speech"] = speech
>>> return batch
>>> ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)
>>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values  # Batch size 1
>>> logits = model(input_values).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)
>>> transcription = processor.decode(predicted_ids[0])
>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
>>> # wrap processor as target processor to encode labels
>>> with processor.as_target_processor():
>>> labels = processor(transcription, return_tensors="pt").input_ids
>>> loss = model(input_values, labels=labels).loss
```
It should be:
```python
>>> with processor.as_target_processor():
>>> labels = processor(target_transcription, return_tensors="pt").input_ids
```
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
@patrickvonplaten | 06-05-2021 02:11:42 | 06-05-2021 02:11:42 | Great catch @zhangbo2008!
Would you like to open a PR to fix it?<|||||>ok
i see you have fixed it in the latest version thanks for your works. |
transformers | 12,032 | closed | Documents of `past_key_values` in input and output for `PegasusModel` are not aligned | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Pegasus @patrickvonplaten, @patil-suraj
## Information
According to the documentation of `PegasusModel`, the `past_key_values` for input and output have different shape,
```
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) β
Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up decoding.
```
```
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) β
Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
```
I'm trying to reproduce the behaviour of passing `past_key_values` as inputs, so I construct a `dummy_past_key_values` to feed into the model,
```
decoder_seq_length = decoder_input_ids.shape[1]
dummy_past_value_keys = torch.ones(size=[1, model.config.num_attention_heads, decoder_seq_length-1, int(model.config.d_model / model.config.num_attention_heads)], dtype=torch.float32)
pkv_tuple = ((dummy_past_value_keys, dummy_past_value_keys), (dummy_past_value_keys, dummy_past_value_keys))
pkv_tuple = tuple([pkv_tuple] * model.config.num_hidden_layers)
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
```
Then I got the following error,
```
AttributeError: 'tuple' object has no attribute 'shape'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-22-be142139ae01> in <module>
----> 1 outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1267 )
1268
-> 1269 outputs = self.model(
1270 input_ids,
1271 attention_mask=attention_mask,
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1151
1152 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
-> 1153 decoder_outputs = self.decoder(
1154 input_ids=decoder_input_ids,
1155 attention_mask=decoder_attention_mask,
~/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
~/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
941
942 # past_key_values_length
--> 943 past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
944
945 if inputs_embeds is None:
AttributeError: 'tuple' object has no attribute 'shape'
```
And if I make a `dummy_past_value_keys` using the shape described by the output document,
```
pkv_tuple = (dummy_past_value_keys,) * 4
pkv_tuple = tuple([pkv_tuple] * model.config.num_hidden_layers)
outputs = model(input_ids, decoder_input_ids=decoder_input_ids, past_key_values=pkv_tuple)
```
No error shows up. Maybe I misunderstood the documentation, but from my perspective `Tuple of length config.n_layers with each tuple having 2 tuples each of which has 2 tensors` seems more like a `Tuple[Tuple[Tuple[torch.Tensor]]]`, and the description itself is quite confusing. The code that throws the exception `past_key_values_length = past_key_values[0][0].shape[2] ` tries to access the tensor's shape, which can be only done with `Tuple[Tuple[torch.Tensor]]`, and it makes more sense that the input and output `past_key_values` have the same shape. I wonder if the description of input `past_key_values` is the old version and hasn't been updated? Thank you.
| 06-05-2021 00:52:04 | 06-05-2021 00:52:04 | Hi @AlfredWGA , you are right ! The description of input `past_key_values` is old and should be updated.
The correct shape is as described by the output docstring. Thanks for reporting! |
transformers | 12,031 | closed | Layoutlmv2 port with testing | # What does this PR do?
Trying to open up my own PR to figure out what's wrong with the install on https://github.com/huggingface/transformers/pull/11933, as it seems to be working locally but not on CircleCI
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#11932
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [microsoft/unilm#325](https://github.com/microsoft/unilm/issues/325)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). Not yet
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@inproceedings{Xu2020LayoutLMv2MP,
title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
year = {2021},
month = {August},
} | 06-05-2021 00:24:00 | 06-05-2021 00:24:00 | Looking good! Feel free to ping me when you want a review :)<|||||>@LysandreJik so, the logic in this PR is ready to be reviewed (mostly from https://github.com/microsoft/unilm with a few bug fixes). The one problem I'm having with this PR is that the detectron2 library requires torch to be **already installed** to build the wheel - and doesn't list it as a dependency!
I have searched through a bunch of python documentation, but I'm still not sure how we can force the torch install to occur before the detectron2 one in setup.py, so any help here would be appreciated if you've seen something like this before. I have also filed an issue (https://github.com/facebookresearch/detectron2/issues/3124) but I'm not sure it's in the scope of the library to support install without torch already on the system.
I could edit the CI instead to add torch before running setup.py, but that seems like it would be error-prone down the road if people are trying to install without pytorch on their system and transformers fails to build. What are your thoughts on how to best solve this issue?
Copying the parts of the detectron library I needed to make layoutlmv2 work was something else I considered, but it is a substantial chunk of the detectron2 code, so I think it's better to just use the library.<|||||>@NielsRogge, you have played with LayoutLM in the past, do you want to give this PR a look?
If the install needs to be done as a two-step process (first all deps, then `detection2`) then I would advocate for not putting it in the `setup.py`, and thoroughly document the behavior, both in the documentation and in the code with the appropriate errors raised when a detectron-less install is detected.<|||||>Any idea when this PR is going to be merged? I am working on TF version of layoutlm 2 and I'd like for this to be merged before I create a branch for layoutlm 2 in TF<|||||>Hi @atahmasb, the author of this PR did not reply yet regarding my review, but maybe I could work on this in a new PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,030 | closed | xla_spawn.py: Cannot load large (~1GB) optimizer.pt from checkpoint | ## Environment info
- `transformers` version: 4.7.0.dev0
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.9
- Python version: 3.7.3
- PyTorch version (GPU?): 1.8.1+cu102 (False, using TPU)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no, using TPU
- Using distributed or parallel set-up in script?: yes, v3-8 TPU
### Who can help
@sgugger
## Information
Model I am using (BertForMaskedLM):
The tasks I am working on is:
* [ ] Custom Bert Pretraining (MLM only)
The problem arises when using:
* [ ] my own modified scripts: (scroll at the bottom for full script)
I am using a modified version of xla_spawn.py, written following:
https://wandb.ai/darshandeshpande/marathi-distilbert/reports/Training-Devanagari-Language-Models-on-TPU-using-Hugging-Face-and-PyTorch--Vmlldzo1MDgyMDQ
The goals are:
1. On-the-fly tokenization (working)
2. Avoid memory waste by wrapping the model with a xmp.MpModelWrapper (not really sure about the actual efficiency of this, but at least no errors result from this modification alone)
**Training without resuming from checkpoint works fine.**
**Also loading checkpoint for a small-bert version (optimizer size ~35MB) works fine.**
**When trying to load a checkpoint for bert-base (optimizer size ~1GB) the program crashes at the line**:
```ruby
optimizer_state = torch.load(os.path.join(checkpoint, "optimizer.pt"), map_location="cpu")
```
of Trainer.py
It is possible it is only a RAM issue (?), but in that case maybe it could be memory optimized.
I am working with an e2-highmem-4 (4 vCPUs, 32 GB memory 1TB persistent disk), accelerated by a v3-8 TPU on GCP.
If torch_load(map_location="cpu") is called 8 times (one per core), it takes around 1.5 x 8 = 12GB, so this should not be a problem, unless a significant amount of RAM is already used or something weird happens.
However, in the small-bert case the same code works.
**If memory is actually the case, would it be possible to store the optimizer (and the remaining checkpoint data) only once?**
**(I guess loading directly to TPU is not an option?)**
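One way to narrow this down (a debugging sketch, not part of the original report; it assumes `psutil` is installed) is to log each process's resident memory right before the `torch.load` call:
```python
import os
import psutil
import torch_xla.core.xla_model as xm

def log_rss(tag):
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1024**3
    # xm.get_ordinal() identifies which of the 8 TPU processes is logging
    print(f"[proc {xm.get_ordinal()}] {tag}: {rss_gb:.2f} GiB resident")
```
Calling this helper just before the optimizer load in every worker shows whether eight concurrent ~1.5 GB loads are what pushes the host over its 32 GB.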
## To reproduce
Steps to reproduce the behavior:
1. run xla_spawn with run_mlm.py for bert-base pretraining:
```ruby
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir False \
--do_train True \
--do_eval False \
--save_steps 10 \
```
2. interrupt the training after at least one checkpoint has been created
3. run xla_spawn with run_mlm.py for bert-base pretraining resuming checkpoint:
```ruby
python xla_spawn.py \
--num_cores=8 \
language-modeling/run_mlm.py \
--train_file $TRAIN_FILE \
--model_name_or_path bert-base-uncased \
--output_dir $NEW_OUTPUT_DIR \
--overwrite_output_dir False \
--do_train True \
--do_eval False \
--save_steps 10 \
--resume_from_checkpoint $CHECKPOINT_DIR \
```
## Expected behavior
resume_from_checkpoint works in the bert_base case as in the small-bert case.
## My Script
```ruby
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp
import logging
import math
import os
import sys
import json
import pickle
from pathlib import Path
from dataclasses import dataclass, field
from typing import Optional
import datasets
from datasets import load_dataset
import transformers
from transformers import (
CONFIG_MAPPING,
MODEL_FOR_MASKED_LM_MAPPING,
AutoConfig,
AutoModelForMaskedLM,
AutoTokenizer,
DataCollatorForLanguageModeling,
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
from transformers.trainer_utils import get_last_checkpoint, is_main_process
from transformers.utils import check_min_version
from transformers import BertConfig, BertTokenizerFast, BertForMaskedLM
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.7.0.dev0")
# Set up logger: writing both to file and to std output
file_handler = logging.FileHandler(filename='tpu_training_logger')
file_handler.setLevel(logging.INFO)
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.INFO)
handlers = [file_handler, stdout_handler]
logging.basicConfig(
format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
datefmt='%H:%M:%S',
level=logging.INFO,
handlers=handlers
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# avoid creating useless and space-consuming copies of the data for each tpu-core
SERIAL_EXEC = xmp.MpSerialExecutor()
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_type: Optional[str] = field(
default="uncased_baseline",
metadata={"help" : "uncased_baseline, cased_baseline, or model"},
)
cache_dir: Optional[str] = field(
default=None,
metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"},
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
model_revision: str = field(
default="main",
metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
)
use_auth_token: bool = field(
default=False,
metadata={
"help": "Will use the token generated when running `transformers-cli login` (necessary to use this script "
"with private models)."
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
validation_split_percentage: Optional[int] = field(
default=5,
metadata={
"help": "The percentage of the train set used as validation set in case there's no validation split"
},
)
max_seq_length: Optional[int] = field(
default=512,
metadata={
"help": "The maximum total input sequence length after tokenization. Sequences longer "
"than this will be truncated."
},
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
mlm_probability: float = field(
default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"}
)
line_by_line: bool = field(
default=True,
metadata={"help": "Whether distinct lines of text in the dataset are to be handled as distinct sequences."},
)
pad_to_max_length: bool = field(
default=True,
metadata={
"help": "Whether to pad all samples to `max_seq_length`. "
"If False, will pad the samples dynamically when batching to the maximum length in the batch."
},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
def __post_init__(self):
if self.train_file is None and self.validation_file is None:
raise ValueError("Need a training/validation file.")
else:
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."
def add_custom_args(hf_parser):
hf_parser.add_argument(
'--icebert_folder',
type=str,
default="/home/riccardobassani17/bucket/transformers/examples/pytorch/language-modeling/icebert",
help="Path to folder containing icebert utils and files"
)
hf_parser.add_argument(
'--config_file',
type=str,
default="/home/riccardobassani17/bucket/transformers/examples/pytorch/language-modeling/icebert/config_files/small_bert.json",
help="Path of the BertConfig json file, relative to the icebert folder"
)
return hf_parser
def get_tokenized_dataset():
tokenized_datasets = datasets.load_dataset('text', data_files=data_files, cache_dir=cache_dir)
def tokenize_function(examples):
# Remove empty lines
examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
return tokenizer(
examples["text"],
padding="max_length",
truncation=True,
max_length=max_len,
return_special_tokens_mask=True,
)
return tokenized_datasets.with_transform(tokenize_function)
def get_data_collator():
return DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
def map_fn(index):
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = add_custom_args( HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) )
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args, args = parser.parse_args_into_dataclasses()
logger.info(f"parser built")
# load and instantiate tokenizer
global tokenizer
tokenizer = BertTokenizerFast.from_pretrained( (Path(args.icebert_folder) / (str(data_args.max_seq_length) + "_tokenizers") / (model_args.model_type + "_tokenizer")))
# load and instantiate configuration file
with open(args.config_file, 'r') as fp:
config_dict = json.load(fp)
config_kwargs = {
"cache_dir": model_args.cache_dir,
"revision": model_args.model_revision,
"use_auth_token": True if model_args.use_auth_token else None,
}
config = BertConfig(vocab_size=tokenizer.vocab_size, max_position_embeddings=data_args.max_seq_length, \
hidden_size=config_dict["hidden_size"], num_hidden_layers=config_dict["num_hidden_layers"], \
num_attention_heads=config_dict["num_attention_heads"], intermediate_size=config_dict["intermediate_size"], \
hidden_act=config_dict["hidden_act"], hidden_dropout_prob=config_dict["hidden_dropout_prob"], \
attention_probs_dropout_prob=config_dict["attention_probs_dropout_prob"], type_vocab_size=config_dict["type_vocab_size"], \
initializer_range=config_dict["initializer_range"], layer_norm_eps=config_dict["layer_norm_eps"], **config_kwargs)
# load and instantiate model
# IMPORTANT: the model is wrapped using the xmp.MpModelWrapper, which loads the model only once, in the global scope
model = xmp.MpModelWrapper(BertForMaskedLM(config))
logger.info(f"tokenizer and model instantiated")
# move model to device
device = xm.xla_device()
model.to(device)
xm.rendezvous("Model moved to device")
# prepare dataset and datacollator for on-the-fly tokenization and masking
global data_files
data_files = {"train": data_args.train_file}
global max_len
max_len = data_args.max_seq_length
global cache_dir
cache_dir = model_args.cache_dir
tokenized_datasets = SERIAL_EXEC.run(get_tokenized_dataset)
xm.rendezvous("Tokenized dataset loaded")
data_collator = SERIAL_EXEC.run(get_data_collator)
xm.rendezvous("DataCollator loaded")
# handle possible checkpoints
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
        elif last_checkpoint is not None and training_args.resume_from_checkpoint is None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
# select and optionally sample the train_dataset
if training_args.do_train:
if "train" not in tokenized_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = tokenized_datasets["train"]
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
# setup training parameters
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
tokenizer=tokenizer,
data_collator=data_collator,
)
# start training
if training_args.do_train:
checkpoint = None
if training_args.resume_from_checkpoint is not None:
checkpoint = training_args.resume_from_checkpoint
elif last_checkpoint is not None:
checkpoint = last_checkpoint
logger.info("*** Starting training ***")
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model() # Saves the tokenizer too for easy upload
logger.info("*** Model saved ***")
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
)
metrics["train_samples"] = min(max_train_samples, len(train_dataset))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
if __name__ == "__main__":
xmp.spawn(map_fn, args=(), nprocs=8, start_method='fork')
```
| 06-04-2021 18:00:01 | 06-04-2021 18:00:01 | That is a weird error. I can't reproduce on my side, using an n2-standard-8 (8 vCPUs, 32 GB memory) with the TPUs.
There is no alternative to load the optimizer state in each process since each of the TPU cores will need it, and it needs to pass through the CPU sadly because PyTorch XLA does not handle loading it directly on an XLA device.<|||||>Thank you very much for the quick response! This seems to suggest it cannot be a RAM issue. I don't know how to check what happens exactly, the only error message I get is:
torch.multiprocessing.spawn.ProcessExitedException: process 6 terminated with signal SIGKILL
I am trying to load from "more advanced" checkpoints (not after 10 steps but after 10k, 30k), but that should not make any difference.
For the rest, there are some differences between the original xla_spawn.py code and my script, but the fact that it works with a smaller model puzzles me.
Any idea about what else could go wrong when loading the optimizer and/or how to get a more specific error?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,029 | closed | Seq2SeqTrainer: cannot set max length when we evaluate/(generate) during training | ## Environment info
- `transformers` version: 4.6.1
### Who can help
- trainer: @sgugger
## Information
Seq2SeqTrainer: cannot set max length when we evaluate/(generate) during training.
I know we can set the max length during the actual evaluation here: https://github.com/huggingface/transformers/blob/cbe63949d7/examples/seq2seq/finetune_trainer.py#L321
But if we want to set the max length during the evaluation in training:
https://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L924
I can see, currently, there is nothing I can pass in.
I found the solution can be `model.config.max_length`, but is there a more explicit argument that I can pass in?
Also, if I want to get the logits as well during training, is there anything I can do here?
https://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L1505
## Expected behavior
Set max length, and output logits, during the evaluation in training
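(For reference, a minimal sketch of the config-based workaround mentioned above; the checkpoint name is only an example:)
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
# generation settings picked up by the evaluation that runs inside Trainer.train()
model.config.max_length = 64
```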
| 06-04-2021 17:17:40 | 06-04-2021 17:17:40 | The Seq2SeqTrainer also accepts the `max_length` argument in its [evaluate method](https://github.com/huggingface/transformers/blob/1f335aef3bb5382b5cfd7adbe5861ed4979dd98d/src/transformers/trainer_seq2seq.py#L41).<|||||>Yeah. But the `Seq2SeqTrainer` extends `Trainer`, which implements the actual`train` function.
https://github.com/huggingface/transformers/blob/cbe63949d7/src/transformers/trainer.py#L924
And it is "fixed" as in no argument will be passed in.<|||||>Ah yes, for this you need to set the parameters in the config then. |
transformers | 12,028 | closed | New TF GLUE example | This is the PR for the new-style TF GLUE example. I'd like to run a few more tests, especially on the weirder datasets like MNLI and STSB, before I merge it, but it's almost ready! | 06-04-2021 17:16:32 | 06-04-2021 17:16:32 | |
transformers | 12,027 | closed | Replace legacy tensor.Tensor with torch.tensor/torch.empty | Motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the PyTorch repo. | 06-04-2021 16:55:24 | 06-04-2021 16:55:24 | |
transformers | 12,026 | closed | Fixes bug that appears when using QA bert and distillation. | This is a fix for
https://github.com/huggingface/transformers/issues/11626
and is somewhat related to:
https://github.com/huggingface/transformers/issues/11941
During backward pass Pytorch complains with:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
This happens because the QA model code modifies the start_positions and end_positions input tensors in place, using the clamp_ function: as a consequence the teacher and the student both modify the inputs, and the backward pass fails.
From a quick check it looks like this is used in at least all QA code, like in:
```
cd transformers/src/transformers/models
grep -nr '[a-z]_(' . | grep clamp
...
./xlnet/modeling_xlnet.py:1877: start_positions.clamp_(0, ignored_index)
...
```
(and the same may apply to other models.)
This may be intended, but it's quite hard for the end user to track down these bugs, as the linked issues show.
(And maybe PyTorch changed something and made this apparent)
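To illustrate the in-place vs. out-of-place distinction behind this fix, here is a standalone toy example (not code from the PR):
```python
import torch

ignored_index = 384  # e.g. the sequence length used as the "ignore" bucket
start_positions = torch.tensor([5, 1000])  # 1000 is out of range

clamped = start_positions.clamp(0, ignored_index)  # out-of-place: caller's tensor untouched
start_positions.clamp_(0, ignored_index)           # in-place: mutates the shared tensor

print(clamped, start_positions)
```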
| 06-04-2021 13:07:04 | 06-04-2021 13:07:04 | It looks like I should also modify a bunch of other models (utils/check_copies.py fails) to match the same change I did in BERT, I will wait to hear from you what is the process to do so.
<|||||>It should be ok now, only non related tests are failing in run_tests_torch.
|