repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 8,202 | closed | 'SummaryWriter' object has no attribute 'add_hparams' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Tried both 1 gpu and 2 gpus. Got the same result.
Additional env information from `pip freeze`:
- tensorboardX==1.6
- tensorflow==2.2.0 (I did not install tensorflow in this conda environment, but it is installed system-wide, so I think pip reads it from there. `import tensorflow` in a Python script would raise `ImportError`, so tensorflow should be considered uninstalled here).
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): `bert-base-cased`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below; in steps to reproduce the situation)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Copy `run_glue.py` from [cdc48ce](https://github.com/huggingface/transformers/commit/cdc48ce92ddf50e7ad871376be651638268b2e9a) (the newest version at the time of writing).
2. Comment out the `from transformers.trainer_utils import is_main_process` line and insert the snippet below (this import throws an exception; pasting the code works around the problem):
```
def is_main_process(local_rank):
    """
    Whether or not the current process is the local process, based on `local_rank`.
    """
    return local_rank in [-1, 0]
```
3. Run the following script.
```
export GLUE_DIR=../../data/glue_data
export TASK_NAME=MNLI
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_predict \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 2 \
--output_dir $TASK_NAME/
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
The error message is:
```
Traceback (most recent call last):
File "run_glue.py", line 421, in <module>
main()
File "run_glue.py", line 356, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer.py", line 717, in train
self.control = self.callback_handler.on_train_begin(self.args, self.state, self.control)
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 329, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 376, in call_event
**kwargs,
File "/h/zining/.conda/envs/myenv/lib/python3.6/site-packages/transformers/integrations.py", line 218, in on_train_begin
self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict={})
AttributeError: 'SummaryWriter' object has no attribute 'add_hparams'
```
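For what it's worth, the environment above pins tensorboardX==1.6, which appears to predate `add_hparams`, so upgrading tensorboardX is likely the most direct fix. If upgrading is not an option, a minimal workaround sketch (assuming `trainer` is the `Trainer` built in `run_glue.py` and that `remove_callback` is available in this Trainer version) is to drop the TensorBoard integration before training:
```python
from transformers.integrations import TensorBoardCallback

# Workaround sketch: removing the TensorBoard callback avoids the
# add_hparams call shown in the traceback above (assumes TensorBoard
# logging is not needed for this run).
trainer.remove_callback(TensorBoardCallback)
trainer.train()
```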
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect running `run_glue.py` to fine-tune on the GLUE task.
Note: Issue #4511 is similar, but that error was thrown in `trainer.py`, while mine is thrown in `trainer_callback.py`, so I think the two issues have different causes. | 10-31-2020 01:04:52 | 10-31-2020 01:04:52 | |
transformers | 8,201 | closed | New model addition | # 🌟 New model addition
## Model description
<!-- Important information -->
## Open source status
* [x] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [x] who are the authors: (mention them, if possible by @gh-username)
| 10-31-2020 00:41:40 | 10-31-2020 00:41:40 | ##<|||||>P#86<|||||>user blocked |
transformers | 8,200 | closed | Mmmmianam | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 10-31-2020 00:12:38 | 10-31-2020 00:12:38 | ๐<|||||>#876 |
transformers | 8,199 | closed | Sentencepiece dependency causing docker build to fail | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.7.0 no gpu
- Tensorflow version (GPU?): N/A
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The question-answering pipeline is the feature I am using. There is a problem with the sentencepiece dependency of transformers: it cannot find the correct package when installing, which causes the build process to fail.
The problem arises when using:
Downloading the transformers library onto a Docker container running Ubuntu.
The tasks I am working on is:
Uploading a transformers script to AWS Fargate.
## To reproduce
Steps to reproduce the behavior:
1. Create a project
2. Try to build the Docker container using the Dockerfile attached below.
I have attached the relevant parts of my Dockerfile below.
Dockerfile
```
FROM ubuntu:18.04
RUN mkdir /usr/app
WORKDIR /usr/app
# Add and install Python modules
COPY requirements.txt ./
RUN apt-get update
RUN apt-get -y install python3
RUN apt-get -y install python3-pip
RUN pip3 install virtualenv
ENV VIRTUAL_ENV=/venv
RUN virtualenv venv -p python3
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip3 install transformers[torch]
RUN pip3 install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip3 install -r requirements.txt
# Bundle app source
COPY . ./
# Expose
EXPOSE 6000
# Run
CMD ["python", "app.py"]
```
This is the stack trace that comes back from running and trying to build using this dockerfile
```
[+] Building 571.8s (14/17)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 835B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:18.04 0.8s
=> [internal] load build context 0.1s
=> => transferring context: 139.29kB 0.1s
=> [1/13] FROM docker.io/library/ubuntu:18.04@sha256:646942475da61b4ce9cc5b3fadb42642ea90e5d0de46111458e100ff2c7031e6 0.0s
=> CACHED [2/13] RUN mkdir /usr/app 0.0s
=> CACHED [3/13] WORKDIR /usr/app 0.0s
=> [4/13] COPY requirements.txt ./ 0.0s
=> [5/13] RUN apt-get update 30.8s
=> [6/13] RUN apt-get -y install python3 15.6s
=> [7/13] RUN apt-get -y install python3-pip 214.0s
=> [8/13] RUN pip3 install virtualenv 4.5s
=> [9/13] RUN virtualenv venv -p python3 0.8s
=> ERROR [10/13] RUN pip3 install transformers[torch] 305.1s
------
> [10/13] RUN pip3 install transformers[torch]:
#14 1.095 Collecting transformers[torch]
#14 1.410 Downloading https://files.pythonhosted.org/packages/2c/4e/4f1ede0fd7a36278844a277f8d53c21f88f37f3754abf76a5d6224f76d4a/
transformers-3.4.0-py3-none-any.whl (1.3MB)
#14 1.897 Collecting numpy (from transformers[torch])
#14 2.507 Downloading https://files.pythonhosted.org/packages/8f/40/ddb5109614aabad67e6fe426b3579a879b7b3cdd375eb27af467c4367ae0/
numpy-1.19.3-cp36-cp36m-manylinux1_x86_64.whl (13.4MB)
#14 5.947 Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers[torch])
#14 5.953 Collecting sentencepiece!=0.1.92 (from transformers[torch])
#14 6.174 Downloading https://files.pythonhosted.org/packages/72/e0/57edbab017a204e9f39448c1717292437a45b5f7cf3a9dbf4a9c026b03c5/
sentencepiece-0.1.94.tar.gz (507kB)
#14 6.575 Collecting sacremoses (from transformers[torch])
#14 6.714 Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/
sacremoses-0.0.43.tar.gz (883kB)
#14 7.158 Collecting requests (from transformers[torch])
#14 7.341 Downloading https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/
requests-2.24.0-py2.py3-none-any.whl (61kB)
#14 7.404 Collecting packaging (from transformers[torch])
#14 7.551 Downloading https://files.pythonhosted.org/packages/46/19/c5ab91b1b05cfe63cccd5cfc971db9214c6dd6ced54e33c30d5af1d2bc43/
packaging-20.4-py2.py3-none-any.whl
#14 7.579 Collecting dataclasses; python_version < "3.7" (from transformers[torch])
#14 7.708 Downloading https://files.pythonhosted.org/packages/e1/d2/6f02df2616fd4016075f60157c7a0452b38d8f7938ae94343911e0fb0b09/
dataclasses-0.7-py3-none-any.whl
#14 7.725 Collecting tokenizers==0.9.2 (from transformers[torch])
#14 8.019 Downloading https://files.pythonhosted.org/packages/7c/a5/78be1a55b2ac8d6a956f0a211d372726e2b1dd2666bb537fea9b03abd62c/
tokenizers-0.9.2-cp36-cp36m-manylinux1_x86_64.whl (2.9MB)
#14 8.732 Collecting regex!=2019.12.17 (from transformers[torch])
#14 9.570 Downloading https://files.pythonhosted.org/packages/87/9f/aad666560082cb11331167cbb31cf0e8bd90af8ea4951436d1fcb2ddde44/
regex-2020.10.28-cp36-cp36m-manylinux1_x86_64.whl (666kB)
#14 9.756 Collecting protobuf (from transformers[torch])
#14 10.02 Downloading https://files.pythonhosted.org/packages/30/79/510974552cebff2ba04038544799450defe75e96ea5f1675dbf72cc8744f/
protobuf-3.13.0-cp36-cp36m-manylinux1_x86_64.whl (1.3MB)
#14 10.36 Collecting tqdm>=4.27 (from transformers[torch])
#14 10.57 Downloading https://files.pythonhosted.org/packages/93/3a/96b3dc293aa72443cf9627444c3c221a7ba34bb622e4d8bf1b5d4f2d9d08/
tqdm-4.51.0-py2.py3-none-any.whl (70kB)
#14 10.60 Collecting torch>=1.0; extra == "torch" (from transformers[torch])
#14 10.79 Downloading https://files.pythonhosted.org/packages/80/2a/58f8078744e0408619c63148f7a2e8e48cf007e4146b74d4bb67c56d161b/
torch-1.7.0-cp36-cp36m-manylinux1_x86_64.whl (776.7MB)
#14 285.6 Collecting click (from sacremoses->transformers[torch])
#14 292.4 Downloading https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/
click-7.1.2-py2.py3-none-any.whl (82kB)
#14 292.4 Collecting joblib (from sacremoses->transformers[torch])
#14 292.6 Downloading https://files.pythonhosted.org/packages/fc/c9/f58220ac44a1592f79a343caba12f6837f9e0c04c196176a3d66338e1ea8/
joblib-0.17.0-py3-none-any.whl (301kB)
#14 292.8 Requirement already satisfied: six in /usr/lib/python3/dist-packages (from sacremoses->transformers[torch])
#14 292.8 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->transformers[torch])
#14 293.0 Downloading https://files.pythonhosted.org/packages/56/aa/4ef5aa67a9a62505db124a5cb5262332d1d4153462eb8fd89c9fa41e5d92/
urllib3-1.25.11-py2.py3-none-any.whl (127kB)
#14 293.0 Collecting chardet<4,>=3.0.2 (from requests->transformers[torch])
#14 293.2 Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/
chardet-3.0.4-py2.py3-none-any.whl (133kB)
#14 293.2 Collecting certifi>=2017.4.17 (from requests->transformers[torch])
#14 293.4 Downloading https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/
certifi-2020.6.20-py2.py3-none-any.whl (156kB)
#14 293.4 Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests->transformers[torch])
#14 293.4 Collecting pyparsing>=2.0.2 (from packaging->transformers[torch])
#14 293.7 Downloading https://files.pythonhosted.org/packages/8a/bb/488841f56197b13700afd5658fc279a2025a39e22449b7cf29864669b15d/
pyparsing-2.4.7-py2.py3-none-any.whl (67kB)
#14 293.7 Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from protobuf->transformers[torch])
#14 293.7 Collecting future (from torch>=1.0; extra == "torch"->transformers[torch])
#14 293.9 Downloading https://files.pythonhosted.org/packages/45/0b/38b06fd9b92dc2b68d58b75f900e97884c45bedd2ff83203d933cf5851c9/
future-0.18.2.tar.gz (829kB)
#14 294.7 Collecting typing-extensions (from torch>=1.0; extra == "torch"->transformers[torch])
#14 294.9 Downloading https://files.pythonhosted.org/packages/60/7a/e881b5abb54db0e6e671ab088d079c57ce54e8a01a3ca443f561ccadb37e/
typing_extensions-3.7.4.3-py3-none-any.whl
#14 294.9 Building wheels for collected packages: sentencepiece, sacremoses, future
#14 294.9 Running setup.py bdist_wheel for sentencepiece: started
#14 295.6 Running setup.py bdist_wheel for sentencepiece: finished with status 'error'
#14 295.6 Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/sent
encepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __
file__, 'exec'))" bdist_wheel -d /tmp/tmprwjlzrwrpip-wheel- --python-tag cp36:
#14 295.6 /usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
#14 295.6 warnings.warn(msg)
#14 295.6 running bdist_wheel
#14 295.6 running build
#14 295.6 running build_py
#14 295.6 creating build
#14 295.6 creating build/lib.linux-x86_64-3.6
#14 295.6 creating build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 295.6 running build_ext
#14 295.6 /bin/sh: 1: pkg-config: not found
#14 295.6 ./build_bundled.sh: 8: ./build_bundled.sh: git: not found
#14 295.6 ./build_bundled.sh: 10: ./build_bundled.sh: git: not found
#14 295.6 ./build_bundled.sh: 12: cd: can't cd to sentencepiece
#14 295.6 ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found
#14 295.6 make: *** No targets specified and no makefile found. Stop.
#14 295.6 make: *** No rule to make target 'install'. Stop.
#14 295.6 env: 'pkg-config': No such file or directory
#14 295.6 Failed to find sentencepiece pkg-config
#14 295.6
#14 295.6 ----------------------------------------
#14 295.6 Failed building wheel for sentencepiece
#14 295.6 Running setup.py clean for sentencepiece
#14 295.8 Running setup.py bdist_wheel for sacremoses: started
#14 296.3 Running setup.py bdist_wheel for sacremoses: finished with status 'done'
#14 296.3 Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
#14 296.4 Running setup.py bdist_wheel for future: started
#14 297.2 Running setup.py bdist_wheel for future: finished with status 'done'
#14 297.2 Stored in directory: /root/.cache/pip/wheels/8b/99/a0/81daf51dcd359a9377b110a8a886b3895921802d2fc1b2397e
#14 297.3 Successfully built sacremoses future
#14 297.3 Failed to build sentencepiece
#14 297.3 Installing collected packages: numpy, sentencepiece, click, joblib, regex, tqdm, sacremoses, urllib3, chardet, certifi, r
equests, pyparsing, packaging, dataclasses, tokenizers, protobuf, future, typing-extensions, torch, transformers
#14 301.3 Running setup.py install for sentencepiece: started
#14 301.7 Running setup.py install for sentencepiece: finished with status 'error'
#14 301.7 Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/se
ntencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code,
__file__, 'exec'))" install --record /tmp/pip-7ji4iyud-record/install-record.txt --single-version-externally-managed --compile:
#14 301.7 /usr/lib/python3.6/distutils/dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type'
#14 301.7 warnings.warn(msg)
#14 301.7 running install
#14 301.7 running build
#14 301.7 running build_py
#14 301.7 creating build
#14 301.7 creating build/lib.linux-x86_64-3.6
#14 301.7 creating build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece
#14 301.7 running build_ext
#14 301.7 /bin/sh: 1: pkg-config: not found
#14 301.7 mkdir: cannot create directory 'bundled': File exists
#14 301.7 ./build_bundled.sh: 8: ./build_bundled.sh: git: not found
#14 301.7 ./build_bundled.sh: 10: ./build_bundled.sh: git: not found
#14 301.7 ./build_bundled.sh: 12: cd: can't cd to sentencepiece
#14 301.7 mkdir: cannot create directory 'build': File exists
#14 301.7 ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found
#14 301.7 make: *** No targets specified and no makefile found. Stop.
#14 301.7 make: *** No rule to make target 'install'. Stop.
#14 301.7 env: 'pkg-config': No such file or directory
#14 301.7 Failed to find sentencepiece pkg-config
#14 301.7
#14 301.7 ----------------------------------------
#14 302.4 Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-o837wqyj/sentencepiece/setup.py';f=
getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" inst
all --record /tmp/pip-7ji4iyud-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in
/tmp/pip-build-o837wqyj/sentencepiece/
------
failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running [/bin/sh -c pip3 install transformers[tor
ch]]: runc did not terminate sucessfully
```
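The log above shows pip building sentencepiece from source and failing because `pkg-config`, `git`, and `cmake` are not available in the image. A sketch of an extra Dockerfile step (to be placed before the `pip3 install transformers[torch]` line; the package names are the standard Ubuntu 18.04 ones and are an assumption here):
```
# Native build prerequisites reported as missing in the log above
RUN apt-get -y install pkg-config git cmake build-essential
```
Alternatively, pinning a sentencepiece release that ships a pre-built wheel for this platform avoids the source build entirely.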
## Expected behavior
I expect transformers to be downloaded and allow me to access it from the docker container.
 | 10-30-2020 20:50:50 | 10-30-2020 20:50:50 | This will be fixed when #8073 is merged.<|||||>Is there any help in terms of what version to pin to in order to avoid this? This is currently a huge blocker on my end.<|||||>On the sentencepiece side I don't know (you can open an issue on their side to ask) but on the `transformers` side we are actively working on removing the hard dependency on sentencepiece and we estimate we should have a new release removing this dependency around the end of next week.
Cc @n1t0 and @Narsil whose work on `tokenizers` is essential to unlock this.<|||||>Great! Thanks for the info!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,198 | closed | Added 12 model cards for Indian Language Models | # What does this PR do?
This PR adds model cards for 12 language models which have been uploaded to the model hub recently over [here](https://huggingface.co/neuralspace-reverie). These cover 3 Indian languages and for each language there are 4 model variants namely: BERT, DistilBERT, RoBERTa and XLM-R.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
| 10-30-2020 20:12:11 | 10-30-2020 20:12:11 | Wow, so cool! Thanks for your contribution. |
transformers | 8,197 | closed | Remove deprecated arguments from new run_clm | # What does this PR do?
Fix a deprecation warning by replacing `tokenizer.max_len` with `tokenizer.model_max_length`.
| 10-30-2020 19:23:50 | 10-30-2020 19:23:50 | |
transformers | 8,196 | closed | pytest Errors | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
```
ai) ubuntu@ip-10-0-1-82:~/transformers$ transformers-cli env
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers==3.4.0', 'console_scripts', 'transformers-cli')())
File "/home/ubuntu/anaconda2/envs/ai/bin/transformers-cli", line 25, in importlib_load_entry_point
return next(matches).load()
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/importlib_metadata/__init__.py", line 105, in load
module = import_module(match.group('module'))
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/__init__.py", line 135, in <module>
from .pipelines import (
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/pipelines.py", line 38, in <module>
from .tokenization_auto import AutoTokenizer
File "/home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg/transformers/tokenization_auto.py", line 210, in <module>
(XLMProphetNetConfig, (XLMProphetNetTokenizer, None)),
NameError: name 'XLMProphetNetTokenizer' is not defined
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.6.11
- PyTorch version (GPU?): 1.7.0 (no GPU)
- Tensorflow version (GPU?): 2.2.0 (no GPU)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
```
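As a side note, a quick way to check which `transformers` installation the interpreter actually imports, which is useful when several copies are installed side by side (a generic diagnostic sketch, not part of the original report):
```python
import transformers

print(transformers.__version__)
print(transformers.__file__)  # path of the installation actually being imported
```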
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
tokenizers: @mfuntowicz
examples/distillation: @VictorSanh
-->
## Information
## To reproduce
Steps to reproduce the behavior:
1. RUN_SLOW=1 pytest examples
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
 | 10-30-2020 18:22:25 | 10-30-2020 18:22:25 | I got the same error while loading BERT tokeniser and model from torch hub<|||||>Hello! Do you mind pasting the result of `pip list` done in your environment? Thank you!<|||||>It's an Anaconda virtual environment.
Python 3.6.11
$ pip list
Package Version Location
--------------------------------- ------------------- ----------------------------------------------------------
absl-py 0.11.0
aiohttp 3.7.2
appdirs 1.4.4
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
attrs 20.2.0
Automat 20.2.0
awscli 1.18.169
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
bcrypt 3.2.0
beautifulsoup4 4.9.3
bertopic 0.2.3
black 20.8b1
bleach 3.2.1
blinker 1.4
bokeh 2.2.3
boto 2.49.0
boto3 1.16.9
botocore 1.19.9
brotlipy 0.7.0
bz2file 0.98
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chainer 7.7.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.2.2
colorama 0.4.3
constantly 15.1.0
cryptography 3.2.1
cssselect 1.1.0
cycler 0.10.0
cymem 1.31.2
Cython 0.29.21
dataclasses 0.7
decorator 4.4.2
deepdist 0.1
defusedxml 0.6.0
dill 0.3.2
diskcache 4.0.0
docutils 0.15.2
entrypoints 0.3
feynman 2.0.0
filelock 3.0.12
findspark 1.3.0
Flask 1.1.2
flatbuffers 1.12
funcy 1.15
future 0.18.2
gast 0.3.3
gensim 3.8.3
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
googleapis-common-protos 1.52.0
grpcio 1.33.2
h5py 2.10.0
hdbscan 0.8.26
html5lib 1.1
hyperlink 20.0.1
hypothesis 5.41.0
idna 2.10
idna-ssl 1.1.0
importlib-metadata 2.0.0
incremental 17.5.0
iniconfig 1.1.1
ipykernel 5.3.4
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
itemadapter 0.1.1
itemloaders 1.0.3
itsdangerous 1.1.0
jedi 0.17.2
Jinja2 2.11.2
jmespath 0.10.0
joblib 0.17.0
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-contrib-core 0.3.3
jupyter-core 4.6.3
jupyter-nbextensions-configurator 0.4.1
jupyterlab 2.2.9
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.0
llvmlite 0.34.0
lxml 4.6.1
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.3.2
mistune 0.8.4
mnist 0.2.2
more-itertools 8.6.0
mpmath 1.1.0
MulticoreTSNE 0.1
multidict 4.7.5
murmurhash 0.26.4
mypy-extensions 0.4.3
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.1
nltk 3.4.4
notebook 6.1.4
numba 0.51.2
numexpr 2.7.1
numpy 1.19.2
oauthlib 3.0.1
olefile 0.46
opt-einsum 3.3.0
packaging 20.4
pandas 1.1.4
pandocfilters 1.4.2
parameterized 0.7.4
parsel 1.6.0
parso 0.7.1
pathspec 0.8.0
patsy 0.5.1
petastorm 0.7.6 /home/ubuntu/petastorm
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.0.1
pip 20.2.4
plac 1.0.0
pluggy 0.13.1
preshed 0.46.4
prometheus-client 0.8.0
promise 2.3
prompt-toolkit 3.0.8
Protego 0.1.16
protobuf 3.13.0
psutil 5.7.3
ptyprocess 0.6.0
py 1.9.0
py4j 0.10.9
pyarrow 2.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.7
pycparser 2.20
PyDispatcher 2.0.5
pydot 1.4.1
Pygments 2.7.2
PyHamcrest 2.0.2
PyJWT 1.7.1
pyLDAvis 2.1.2
pyOpenSSL 19.1.0
pyparsing 2.4.7
PyQt5 5.12.3
PyQt5-sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.17.3
PySocks 1.7.1
pyspark 3.0.1
pytest 6.1.2
python-dateutil 2.8.1
pytz 2020.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.7
QtPy 1.9.0
queuelib 1.5.0
regex 2020.10.28
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.4.1
s3transfer 0.3.3
sacremoses 0.0.43
scapy 2.4.4
scikit-learn 0.23.2
scipy 1.5.2
Scrapy 2.4.0
seaborn 0.11.0
semver 2.8.1
Send2Trash 1.5.0
sense2vec 0.6.0
sentence-transformers 0.3.6
sentencepiece 0.1.91
service-identity 18.1.0
setuptools 49.6.0.post20201009
six 1.15.0
sklearn 0.0
smart-open 1.6.0
sortedcontainers 2.2.2
soupsieve 2.0.1
spacy 0.101.0
sputnik 0.9.3
statsmodels 0.12.1
sympy 1.6.2
tensorboard 2.3.0
tensorboard-plugin-wit 1.7.0
tensorflow 2.2.0
tensorflow-datasets 1.2.0
tensorflow-estimator 2.2.0
tensorflow-metadata 0.14.0
tensorflow-probability 0.6.0
tensorflowonspark 1.4.1
termcolor 1.1.0
terminado 0.9.1
testpath 0.4.4
tfp-nightly 0.5.0.dev20190522
thinc 5.0.8
threadpoolctl 2.1.0
timeout-decorator 0.4.1
tokenizers 0.9.2
toml 0.10.1
torch 1.7.0
torchaudio 0.7.0a0+ac17b64
torchvision 0.8.1
tornado 6.1
tqdm 4.51.0
traitlets 4.3.3
transformers 3.1.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages
Twisted 20.3.0
twython 3.8.2
typed-ast 1.4.1
typing-extensions 3.7.4.3
umap-learn 0.4.6
urllib3 1.25.11
w3lib 1.22.0
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.35.1
widgetsnbextension 3.5.1
wordcloud 1.8.0
wrapt 1.12.1
yarl 1.6.2
zipp 3.4.0
zope.interface 5.1.2
> On Nov 2, 2020, at 7:33 AM, Lysandre Debut <[email protected]> wrote:
>
>
> Hello! Do you mind pasting the result of pip list done in your environment? Thank you!
<|||||>It seems you have a conflict between your `transformers` version, as `transformers-cli env` returns v3.4.0, while your `pip list` returns v3.1.0?<|||||>Mea culpa! I sent you the pip list from my Mac.
Here's the Ubuntu 20.04 LTS results
$ conda list transformers
# packages in environment at /home/ubuntu/anaconda2/envs/ai:
#
# Name Version Build Channel
sentence-transformers 0.3.6 pypi_0 pypi
transformers 3.4.0 dev_0 <develop>
(ai) ubuntu@ip-10-0-1-82:~/transformers$
$ pip list
Package Version Location
--------------------------------- ------------------- ---------------------------------------------------------------------------------------
absl-py 0.11.0
aiohttp 3.7.2
appdirs 1.4.4
argon2-cffi 20.1.0
astor 0.8.1
astunparse 1.6.3
async-generator 1.10
async-timeout 3.0.1
attrs 20.2.0
Automat 20.2.0
awscli 1.18.169
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
bcrypt 3.2.0
beautifulsoup4 4.9.3
bertopic 0.2.3
black 20.8b1
bleach 3.2.1
blinker 1.4
bokeh 2.2.3
boto 2.49.0
boto3 1.16.9
botocore 1.19.9
brotlipy 0.7.0
bz2file 0.98
cachetools 4.1.1
certifi 2020.6.20
cffi 1.14.3
chainer 7.7.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.2.2
colorama 0.4.3
constantly 15.1.0
cryptography 3.2.1
cssselect 1.1.0
cycler 0.10.0
cymem 1.31.2
Cython 0.29.21
dataclasses 0.7
decorator 4.4.2
deepdist 0.1
defusedxml 0.6.0
dill 0.3.2
diskcache 4.0.0
docutils 0.15.2
entrypoints 0.3
feynman 2.0.0
filelock 3.0.12
findspark 1.3.0
Flask 1.1.2
flatbuffers 1.12
funcy 1.15
future 0.18.2
gast 0.3.3
gensim 3.8.3
google-auth 1.23.0
google-auth-oauthlib 0.4.2
google-pasta 0.2.0
googleapis-common-protos 1.52.0
grpcio 1.33.2
h5py 2.10.0
hdbscan 0.8.26
html5lib 1.1
hyperlink 20.0.1
hypothesis 5.41.0
idna 2.10
idna-ssl 1.1.0
importlib-metadata 2.0.0
incremental 17.5.0
iniconfig 1.1.1
ipykernel 5.3.4
ipython 7.12.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
itemadapter 0.1.1
itemloaders 1.0.3
itsdangerous 1.1.0
jedi 0.17.2
Jinja2 2.11.2
jmespath 0.10.0
joblib 0.17.0
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.7
jupyter-console 6.2.0
jupyter-contrib-core 0.3.3
jupyter-core 4.6.3
jupyter-nbextensions-configurator 0.4.1
jupyterlab 2.2.9
jupyterlab-pygments 0.1.2
jupyterlab-server 1.2.0
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
kiwisolver 1.3.0
llvmlite 0.34.0
lxml 4.6.1
Markdown 3.3.3
MarkupSafe 1.1.1
matplotlib 3.3.2
mistune 0.8.4
mnist 0.2.2
more-itertools 8.6.0
mpmath 1.1.0
MulticoreTSNE 0.1
multidict 4.7.5
murmurhash 0.26.4
mypy-extensions 0.4.3
nbclient 0.5.1
nbconvert 6.0.7
nbformat 5.0.8
nest-asyncio 1.4.1
nltk 3.4.4
notebook 6.1.4
numba 0.51.2
numexpr 2.7.1
numpy 1.19.2
oauthlib 3.0.1
olefile 0.46
opt-einsum 3.3.0
packaging 20.4
pandas 1.1.4
pandocfilters 1.4.2
parameterized 0.7.4
parsel 1.6.0
parso 0.7.1
pathspec 0.8.0
patsy 0.5.1
petastorm 0.7.6 /home/ubuntu/petastorm
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.0.1
pip 20.2.4
plac 1.0.0
pluggy 0.13.1
preshed 0.46.4
prometheus-client 0.8.0
promise 2.3
prompt-toolkit 3.0.8
Protego 0.1.16
protobuf 3.13.0
psutil 5.7.3
ptyprocess 0.6.0
py 1.9.0
py4j 0.10.9
pyarrow 2.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.7
pycparser 2.20
PyDispatcher 2.0.5
pydot 1.4.1
Pygments 2.7.2
PyHamcrest 2.0.2
PyJWT 1.7.1
pyLDAvis 2.1.2
pyOpenSSL 19.1.0
pyparsing 2.4.7
PyQt5 5.12.3
PyQt5-sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.17.3
PySocks 1.7.1
pyspark 3.0.1
pytest 6.1.2
python-dateutil 2.8.1
pytz 2020.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 19.0.2
qtconsole 4.7.7
QtPy 1.9.0
queuelib 1.5.0
regex 2020.10.28
requests 2.24.0
requests-oauthlib 1.3.0
rsa 4.4.1
s3transfer 0.3.3
sacremoses 0.0.43
scapy 2.4.4
scikit-learn 0.23.2
scipy 1.5.2
Scrapy 2.4.0
seaborn 0.11.0
semver 2.8.1
Send2Trash 1.5.0
sense2vec 0.6.0
sentence-transformers 0.3.6
sentencepiece 0.1.91
service-identity 18.1.0
setuptools 49.6.0.post20201009
six 1.15.0
sklearn 0.0
smart-open 1.6.0
sortedcontainers 2.2.2
soupsieve 2.0.1
spacy 0.101.0
sputnik 0.9.3
statsmodels 0.12.1
sympy 1.6.2
tensorboard 2.3.0
tensorboard-plugin-wit 1.7.0
tensorflow 2.2.0
tensorflow-datasets 1.2.0
tensorflow-estimator 2.2.0
tensorflow-metadata 0.14.0
tensorflow-probability 0.6.0
tensorflowonspark 1.4.1
termcolor 1.1.0
terminado 0.9.1
testpath 0.4.4
tfp-nightly 0.5.0.dev20190522
thinc 5.0.8
threadpoolctl 2.1.0
timeout-decorator 0.4.1
tokenizers 0.9.2
toml 0.10.1
torch 1.7.0
torchaudio 0.7.0a0+ac17b64
torchvision 0.8.1
tornado 6.1
tqdm 4.51.0
traitlets 4.3.3
transformers 3.4.0 /home/ubuntu/anaconda2/envs/ai/lib/python3.6/site-packages/transformers-3.4.0-py3.6.egg
Twisted 20.3.0
twython 3.8.2
typed-ast 1.4.1
typing-extensions 3.7.4.3
umap-learn 0.4.6
urllib3 1.25.11
w3lib 1.22.0
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.35.1
widgetsnbextension 3.5.1
wordcloud 1.8.0
wrapt 1.12.1
yarl 1.6.2
zipp 3.4.0
zope.interface 5.1.2
> On Nov 2, 2020, at 9:15 AM, Lysandre Debut <[email protected]> wrote:
>
>
> It seems you have a conflict between your transformers version, as transformers-cli env returns v3.4.0, while your pip list returns v3.1.0?
<|||||>After looking a bit into it, it seems there was the initialization of the XLMProphetNetTokenizer missing when the `sentencepiece` dependency was not detected. #8245 should solve it, thank you for raising an issue!<|||||>Great! Thank you.
btw - There are many missing packages when I try to run `pytest` for tests and examples.
E.g. - datasets, timeout-decorator, faiss, parameterized, etc.
It would be nice if there was a requirements.txt file. (Just a suggestion).
;-)
> On Nov 2, 2020, at 10:58 AM, Lysandre Debut <[email protected]> wrote:
>
>
> After looking a bit into it, it seems there was the initialization of the XLMProphetNetTokenizer missing when the sentencepiece dependency was not detected. #8245 <https://github.com/huggingface/transformers/pull/8245> should solve it, thank you for raising an issue!
<|||||>For the tests, you should be able to get it working with `pip install transformers[testing]` or `pip install .[testing]` if you have cloned the repository.
For the examples, there is a `requirements.txt` file in the `examples/` directory:
```shell-script
cd examples
pip install -r requirements.txt
```<|||||>Just merged #8245, installing from source should remove the error mentioned previously. Thanks again for letting us know! |
transformers | 8,195 | closed | Attempt at a temporary fix on `model_max_length` for roberta and Camembert variants | - The issue is that this information is not contained in the
`tokenizer` config file.
- It used to be hardcoded already (to the value 512 as well); see the stopgap sketch below.
- It is unclear right now how to "properly" fix it.
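In the meantime, a user-side stopgap sketch is to pass the value explicitly when loading the tokenizer; 512 is assumed here, matching the previously hardcoded default mentioned above:
```python
from transformers import AutoTokenizer

# Stopgap sketch: 512 is an assumed value, not read from any config file
tokenizer = AutoTokenizer.from_pretrained("camembert-base", model_max_length=512)
```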
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #8117 (tentatively) (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
--> | 10-30-2020 18:11:39 | 10-30-2020 18:11:39 | No, `model_max_length` is not defined in the `tokenizer.json` for these models so truncation is off, or fails at inference in the model.<|||||>Well we also load the `transformers` specific configuration with `from_pretrained`, not only the `tokenizers` file (because We have additional attributes in `transformers`). It should be in this configuration file. Iโll take a look.<|||||>Okay, I'll wait @thomwolf for your advice on this then.<|||||>Stale<|||||>Too old |
transformers | 8,194 | closed | [Seq2SeqTrainer] Move import to init to make file self-contained | # What does this PR do?
Seq2SeqTrainer can be used as an independent file if no `label_smoothing` is done. This PR moves the import to the init to make it possible to simply download this file and use it as is, without any extra dependencies, for standard seq2seq training.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Would be awesome if @patil-suraj and @stas00 could review as well :-)
| 10-30-2020 18:06:54 | 10-30-2020 18:06:54 | I'm okay with this, but users will still need to copy paste or rewrite `Seq2SeqTrainingArguments` and pass those as `args` instead of default `TrainingArguments`, since we assume `args` is `Seq2SeqTrainingArguments` in multiple methods
|
transformers | 8,193 | closed | Fix two bugs with --logging_first_step | # What does this PR do?
This PR fixes two bugs relating to the `--logging_first_step` flag:
1. Though the description for `--logging_first_step` says `"Log and eval the first global_step"`, the flag doesn't actually eval (it only logs). This PR makes sure that eval happens on the first step.
2. When `--logging_first_step` is on, the logged training loss for the first step is miscalculated in `Trainer._maybe_log_save_evaluate`:
```python
logs["loss"] = (tr_loss_scalar - self._logging_loss_scalar) / self.args.logging_steps
```
This divides the loss by `logging_steps` (which is typically large, e.g. 500), when it should be divided by 1. This PR makes sure that the loss is divided by the correct number of steps.
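A minimal sketch of the intended computation, where `last_logged_step` stands for assumed bookkeeping of the step at which the previous log happened (the other names come from the snippet above):
```python
steps_since_last_log = max(1, self.state.global_step - last_logged_step)
logs["loss"] = (tr_loss_scalar - self._logging_loss_scalar) / steps_since_last_log
```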
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 10-30-2020 17:35:31 | 10-30-2020 17:35:31 | Thanks for your PR! For the first point, I think we should fix the docs, not add an evaluation. It doesn't make any sense to evaluate at step 1 (one could call `trainer.evaluate()` before training if they really wanted to).
For the second point, good catch, this is certainly useful!<|||||>OK, both the description and the behavior is now logging only (no eval).<|||||>Perfect, thanks! |
transformers | 8,192 | closed | Add model cards. | Complete the author list in model cards for DynaBERT. | 10-30-2020 16:29:42 | 10-30-2020 16:29:42 | |
transformers | 8,191 | closed | Patch 3 | complete author list in model cards | 10-30-2020 16:20:34 | 10-30-2020 16:20:34 | |
transformers | 8,190 | closed | TextDataset support for tensorflow? | Hey guys, I find [`TextDataset`](https://github.com/huggingface/transformers/blob/9a21b50614991889f11dbe0743af25923765f9e9/src/transformers/data/datasets/language_modeling.py#L20) and `LineByLineTextDataset` to be a great design; they help people build input data much faster. But it's a pity that they only support **PyTorch** now. Is it possible to support **TensorFlow**? | 10-30-2020 15:58:14 | 10-30-2020 15:58:14 | There is no plan for that as both those APIs will be deprecated soon. Users should directly use the [Datasets](https://github.com/huggingface/datasets) library which works for both PyTorch and TF. There are examples of how to replicate `TextDataset` and `LineByLineTextDataset` using that library in the new [`run_clm`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) and [`run_mlm`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) scripts. To convert the datasets to the TF format, just use their `set_format` method (see the [doc here](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format)). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
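To make the `set_format` suggestion above concrete, a minimal sketch (assumes a local `train.txt` and an already-loaded `tokenizer`):
```python
from datasets import load_dataset

dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=128), batched=True)
dataset.set_format(type="tensorflow", columns=["input_ids", "attention_mask"])
```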
|
transformers | 8,189 | closed | Doc fixes and filter warning in wandb | # What does this PR do?
There is no `XxxForPreTrainingModel`, just `XxxForPretraining`, so this fixes the docstrings in multiple files.
Also, as discussed on the Comet side, there should be no warning when the environment variable says wandb should not be used.
| 10-30-2020 15:43:00 | 10-30-2020 15:43:00 | |
transformers | 8,188 | closed | Finalize lm examples | # What does this PR do?
This PR adds a `run_mlm_wwm` script as an example of MLM with whole word masking, which was the last kind of example supported by `run_language_modeling`.
As a result, it moves this script to `contrib/legacy/` and updates the README to document how to use all the new example scripts.
I also reworked the table of tasks in the main README a tiny bit to include whether or not the examples leverage the Datasets library. | 10-30-2020 15:25:46 | 10-30-2020 15:25:46 | |
transformers | 8,187 | closed | Configuration initialized from checkpoint does not keep the checkpoint identifier in its attributes | Since version v3.4.0, initializing a model using the `from_pretrained` method adds a `name_or_path` attribute to the configuration, referencing the checkpoint used for initialization:
```py
from transformers import BertModel
model = BertModel.from_pretrained(model_name)
print(model.config.name_or_path)
# model_name
```
However, initializing the configuration on its own with the `from_pretrained` method does not yield the same attribute:
```py
from transformers import BertConfig
config = BertConfig.from_pretrained(model_name)
# config has no `name_or_path` attribute
```
This means that the configuration object initialized is not the same in both cases, whereas it probably should be. | 10-30-2020 15:16:55 | 10-30-2020 15:16:55 | Good point.
Maybe what we could do is:
- initializing the configuration with `from_pretrained` initializes the `name_or_path` attribute of the config as you mention, and
- using the configuration in a *model* `from_pretrained` method overrides the `name_or_path` attribute with the one of the model, so that it is primarily linked to the weights path.
Another option would be to have two attributes in the configuration:
- `configuration_name_or_path`
- `weights_name_or_path`
respectively populated by the config `from_pretrained` and the model `from_pretrained`. Maybe with a property linking to one in priority.
but I'm wondering if it's worth so many attributes...<|||||>I would go for the first version you proposed: having `name_or_path` for the configuration initialized if used alongside `from_pretrained`, which gets overridden by the model `from_pretrained`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,186 | closed | T5 (probably BART) issues with the `tf.saved_model.save` API and the `output_xxx` configuration attributes. | The TensorFlow implementation of the T5 (and very probably the BART) model has an issue with using the tf.saved_model.save API alongside the `output_attentions=True` and the `output_hidden_states=True` configuration attributes.
The tests are skipped currently due to this issue. | 10-30-2020 15:12:28 | 10-30-2020 15:12:28 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,185 | closed | TensorFlow Longformer model as a saved model with attention outputs | The TensorFlow implementation of the Longformer model has an issue with using the `tf.saved_model.save` API alongside the `output_attentions=True` configuration attribute.
The test is skipped currently due to this issue. | 10-30-2020 15:09:26 | 10-30-2020 15:09:26 | Don't manage to get this test passing even with the new design of #7562 -> the problem to me is that the shape of `attentions` in Longformer depends on the input tensor => so not sure we'll find a good solution here<|||||>If it can't pass the test defined in the common tests, then the best would be to override the test in the `LongformerModelTester` and do a test to ensure that the correct behavior still works, even if not adhering to the common tests.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,184 | closed | trainer.evaluate returns 'epoch' from training | I am training a BERT model: `trainer.train()`
Then I call `evaluate_result = trainer.evaluate(labeled_dataset_test)`
The value of `evaluate_result` looks like this:
```python
{'eval_loss': 0.5908029079437256,
'eval_acc': 0.8282828282828283,
'eval_bac': 0.8243021346469622,
'eval_mcc': 0.7422526698197041,
'eval_f1_macro': 0.826792009400705,
'epoch': 3.0,
'total_flos': 1373653507542624}
```
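As an aside, downstream code that only wants the metrics can filter on the keys shown above; a minimal sketch:
```python
metrics_only = {k: v for k, v in evaluate_result.items() if k not in ("epoch", "total_flos")}
```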
IMO the dict should not contain `'epoch': 3.0,`. That is the number of epochs from training. It has nothing to do with evaluation... | 10-30-2020 14:01:39 | 10-30-2020 14:01:39 | You can easily ignore that value though.
The problem is that you won't have it at each eval during the training loop if we don't include it. There could be something smarter there, but it would take time for something that is just purely cosmetic.<|||||>I added a small PR for improved documentation about this: #8273<|||||>Closing this since PR was merged. |
transformers | 8,183 | closed | Summarization outputs on T5-small gets truncated | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- --> I have been fine-tuning t5-small on my own dataset, but every time I set a max_length it just truncates the output. For example, my input statement is:
**When I first entered high school I was very nervous as it was a new school for me and it was a big adjustment. I was overwhelmed with work and mentally wasn't staying optimistic as I found it hard to manage my time and make friends. I felt like I wasn't good enough, and this caused me to treat myself like I wasn't worthy of being at such a place. In terms of behavior to others, I would say it made me more shy while still adapting to the new environment.**
and my output is as follows:
**when I first entered high school I was very nervous as it was a new school for me and it was a**
My generate call is as follows:
**( input,
min_length= 0,
max_length=25,
length_penalty=2.0,
num_beams=4,
early_stopping=True )**
Is it possible for me to make it not truncate at the end? and also make it generate a reasonable summary ?
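For context, the cut-off comes from `max_length` being a hard upper bound on the generated sequence. A minimal sketch with a larger, purely illustrative bound (the values are assumptions, not a recommendation):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = "summarize: When I first entered high school I was very nervous ..."
input_ids = tokenizer(text, return_tensors="pt").input_ids
summary_ids = model.generate(
    input_ids,
    max_length=150,   # 25 is what truncates the output above
    min_length=40,
    length_penalty=2.0,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```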
**A link to original question on the forum/Stack Overflow**: | 10-30-2020 12:53:02 | 10-30-2020 12:53:02 | Hey @harung1993,
sorry I'm having trouble understanding your question here. Also this seems like a question that should rather be posted in https://discuss.huggingface.co/ . We are trying to use github issues only for bug reports. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,182 | closed | Cannot load pytorch_model.bin / PyTorch version? | # ❓ Questions & Help
## Details
torch version 1.4.0
I execute run_language_modeling.py and save the model. However, when I load the saved model, "OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a Pytorch model from a TF 2.0 checkpoint, please set from_tf=True" occurs.
If I install torch==1.6.0, the model loads successfully. However, I have to use torch 1.4.0 and torchvision 0.5.0. How can I load pytorch_model.bin with torch 1.4.0?
I also tried to run run_language_modeling.py with torch 1.4.0, but it cannot import "torch.optim.lr_scheduler", so the training code cannot be executed.
Thus my question is
[1] How can I load pytorch_model.bin in torch version 1.4.0 / or
[2] How can I train run_language_modeling.py in torch version 1.4.0?
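One workaround that is sometimes suggested for this situation (an assumption on my part, not something from this thread): re-save the checkpoint from an environment with torch >= 1.6 using the legacy serialization format, which torch 1.4 can read. Roughly:
```python
import torch
from transformers import AutoModelWithLMHead

# Run this in an environment with torch >= 1.6 (paths are placeholders).
model = AutoModelWithLMHead.from_pretrained("path/to/saved_model")
torch.save(
    model.state_dict(),
    "path/to/saved_model/pytorch_model.bin",
    _use_new_zipfile_serialization=False,  # write the old format torch 1.4 understands
)
```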
**A link to original question on the forum/Stack Overflow**: | 10-30-2020 12:35:12 | 10-30-2020 12:35:12 | Hi! Could you provide the code you're using, as well as all the environment information?<|||||>@LysandreJik should we update the issue template for this last option `Questions & Help`?
I feel like our first question to everybody is always `Could you provide the code you're using, as well as all the environment information`<|||||>You're right, we should!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,181 | closed | Documentation on how to get results out of trainer is missing. | Hi,
Some time ago it was possible to get the results out of the trainer via `trainer.log_history`.
This has now changed to `trainer.state.log_history`, but it is not documented. I suggest adding documentation on how to get results out of the trainer. | 10-30-2020 12:06:02 | 10-30-2020 12:06:02 | Hello! Indeed, do you want to open a PR to fix this? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
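For reference, a minimal sketch of reading the history mentioned above (it assumes the `trainer` object from this issue; `log_history` is a plain list of dicts):
```python
for entry in trainer.state.log_history:  # one dict per logging/eval step
    print(entry)
```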
|
transformers | 8,180 | closed | Fix the behaviour of DefaultArgumentHandler (removing it). | # What does this PR do?
This PR will attempt to fix some clearly wrong error messages in what I think should be
valid calls.
The decision to remove `DefaultArgumentHandler` comes from the fact that its real usage was limited to QuestionAnswering and ZeroShot, which already have their own handlers.
Having plain Python handle arguments and errors seems much more predictable, and removing `*args` from function
signatures makes the code more readable, I think. We need to be very careful though, as the number of arguments needs to stay in sync,
otherwise errors can happen (this is due to the mix of positional arguments, named positional arguments and generic keyword arguments being used together).
For the reader, the call order of functions is something like
```python
SpecificPipeline.__call__(myargument1, myargument2, **kwargs)
# Which calls
Pipeline.__call__(*args, **kwargs)
# Which in turn calls
SpecificPipeline._parse_and_tokenize(my_argument1, my_argument2, **kwargs)
```
Smaller quality-of-life changes: I tried to normalize inputs as early as possible in the call stack (i.e. `SpecificPipeline.__call__`) so we don't have to do it over and over.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@mfuntowicz
@sgugger
| 10-30-2020 11:23:50 | 10-30-2020 11:23:50 | Sure before the change
```python
from transformers import pipeline
pipe = pipeline(task='fill-mask', model='bert-base-uncased')
pipe("I am a real [MASK]", targets=["superhero", "legend"])
# [{'sequence': '[CLS] i am a real superhero [SEP]',
# 'score': 1.21390044682812e-07,
# 'token': 16251,
# 'token_str': 'superhero'},
# {'sequence': '[CLS] i am a real legend [SEP]',
# 'score': 4.292454747201191e-08,
# 'token': 5722,
# 'token_str': 'legend'}]
pipe("I am a real [MASK]", otherarg=True)
ValueError Traceback (most recent call last)
<ipython-input-13-4784fa412984> in <module>
----> 1 pipe("I am a real [MASK]", otherarg=True)
~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, targets, *args, **kwargs)
1201 - **token** (:obj:`str`) -- The predicted token (to replace the masked one).
1202 """
-> 1203 inputs = self._parse_and_tokenize(*args, **kwargs)
1204 outputs = self._forward(inputs, return_tensors=True)
1205
~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in _parse_and_tokenize(self, padding, add_special_tokens, *args, **kwargs)
625 """
626 # Parse arguments
--> 627 inputs = self._args_parser(*args, **kwargs)
628 inputs = self.tokenizer(
629 inputs,
~/.pyenv/versions/3.8.5/lib/python3.8/site-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
179 def __call__(self, *args, **kwargs):
180 if len(kwargs) > 0 and len(args) > 0:
--> 181 raise ValueError("Pipeline cannot handle mixed args and kwargs")
182
183 if len(kwargs) > 0:
ValueError: Pipeline cannot handle mixed args and kwargs
```
And afterwards:
```python
from transformers import pipeline
pipe = pipeline(task='fill-mask', model='bert-base-uncased')
pipe("I am a real [MASK]", otherarg=True)
# [{'sequence': '[CLS] i am a real. [SEP]',
# 'score': 0.94329434633255,
# 'token': 1012,
# 'token_str': '.'},
# {'sequence': '[CLS] i am a real ; [SEP]',
# 'score': 0.02879592962563038,
# 'token': 1025,
# 'token_str': ';'},
# {'sequence': '[CLS] i am a real! [SEP]',
# 'score': 0.022438935935497284,
# 'token': 999,
# 'token_str': '!'},
# {'sequence': '[CLS] i am a real? [SEP]',
# 'score': 0.00518036400899291,
# 'token': 1029,
# 'token_str': '?'},
# {'sequence': '[CLS] i am a real... [SEP]',
# 'score': 3.598905823309906e-05,
# 'token': 2133,
# 'token_str': '...'}]
```<|||||>Should I merge ?<|||||>I could start that.<|||||>(no need to do it in this PR, it can wait :) |
transformers | 8,179 | closed | `do_predict` option of `TrainingArguments` - but no way to pass test set. | The `TrainingArguments` class has the option to pass `do_predict=True`. The doc says: "Whether to run predictions on the test set or not."
But there is no way to pass a test set to the trainer. At least I can not find it in the documentation...
Can you please clarify / fix this?
Many thanks
Philip | 10-30-2020 11:18:28 | 10-30-2020 11:18:28 | The `do_predict` argument (like `do_train` and `do_eval`) is not used by `Trainer` itself, just by the training scripts provided as examples.
Getting predictions on a test set is done with `trainer.predict(test_dataset)`.<|||||>Should corresponding documentation be added?<|||||>Sure, do you want to take a stab at it?<|||||>@sgugger I can do a PR if you want. But...
... for me it smells like a design flaw when this is only for CLI usage and has no meaning for the "normal use".
Should we consider just removing it?
How would a documentation look like? _"This field is just a workaround for CLI value storage for the example code and has no meaning for normal usage."_?<|||||>It's not `TrainerArguments` but `TrainingArguments`, so I don't see the problem with some of those arguments being only for CLI usage. Besides, removing them would break existing code so it would do more harm than good IMO.
For the documentation itself, it's not just a workaround. Something along the line of
```
This argument is not directly used by :class:`~transformers.Trainer`, it's intended to be used by your training/evaluation scripts instead. See the `example scripts <https://github.com/huggingface/transformers/tree/master/examples>`___ for more details.
```
would sound better.<|||||>> ```
> This argument is not directly used by :class:`~transformers.Trainer`, it's intended to be used by your training/evaluation scripts instead. See the `example scripts <https://github.com/huggingface/transformers/tree/master/examples>`___ for more details.
> ```
>
> would sound better.
PR has been created: #8270<|||||>closing this since PR was merged |
transformers | 8,178 | closed | Minor style improvements for the Flax BERT and RoBERTa examples | 1. Use `@nn.compact` rather than `@compact` (as to not make it seem
like compact is a standard Python decorator).
2. Move attribute docstrings from two `__call__` methods to comments
on the attributes themselves. (This was probably a remnant from
the pre-Linen version where the attributes were arguments to
`call`.)
# What does this PR do?
Minor style improvements:
1. Use `@nn.compact` rather than `@compact` (so as not to make it seem
like `compact` is a standard Python decorator; see the small sketch after this list).
2. Move attribute docstrings from two `__call__` methods to comments
on the attributes themselves. (This was probably a remnant from
the pre-Linen version where the attributes were arguments to
`call`.)
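A small made-up Linen module illustrating both conventions (this is not the actual Flax BERT/RoBERTa code):
```python
import flax.linen as nn


class MLPBlock(nn.Module):
    hidden_size: int  # attribute documented with a comment, not in __call__'s docstring
    output_size: int  # same here

    @nn.compact  # fully qualified decorator, so its origin is obvious
    def __call__(self, x):
        x = nn.Dense(self.hidden_size)(x)
        x = nn.relu(x)
        return nn.Dense(self.output_size)(x)
```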
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? No. It's just adjusting the Flax example to the current best practices (I work on Flax)
- [x] Did you make sure to update the documentation with your changes? No doc changes.Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
It's not clear what the right pattern is for docstrings of dataclass attributes. I went with something pragmatic here. I couldn't find any online references for the "correct Pythonic pattern" here -- LMK if there's another form you prefer.
- [x] Did you write any new necessary tests? No new tests. Exists tests pass.
## Who can review? | 10-30-2020 10:22:05 | 10-30-2020 10:22:05 | cc @LysandreJik @mfuntowicz <|||||>Offline approval from @mfuntowicz! |
transformers | 8,177 | closed | AutoTokenizer.from_pretrained function cannot be customized | The tokenizers library lets me build a customized tokenizer, so I can directly reuse the word-segmentation data and vocab from my previous fairseq setup. However, the AutoTokenizer.from_pretrained function in transformers cannot be customized in the same way, so I have no way to directly use fairseq's vocab and word-segmentation data in transformers.
What needs to be done? | 10-30-2020 09:37:17 | 10-30-2020 09:37:17 | Hi! Could you provide an example of the usage that you would like to see with the `transformers` library, so that we may see what can be done?<|||||>Is this the same as #8125 ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,176 | closed | Fixing some warnings in DeBerta | # What does this PR do?
Just fixes some simple warnings coming from Python due to incorrect escapes in docstrings, plus a `collections.abc` import.
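For illustration only (this is a hypothetical example, not the actual diff), the two kinds of fixes look roughly like this:
```python
# Hypothetical example, not code from modeling_deberta.py.
import collections.abc  # `collections.Sequence` & co. are deprecated aliases


def scaled_size(x):
    r"""Raw docstring, so \sqrt{d} does not trigger an invalid-escape warning."""
    return x ** 0.5


def as_list(value):
    if isinstance(value, collections.abc.Sequence) and not isinstance(value, str):
        return list(value)
    return [value]
```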
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger
--> | 10-30-2020 09:20:56 | 10-30-2020 09:20:56 | |
transformers | 8,175 | closed | ONNX-converted model output shape not matching the fine-tuned model (BUG) | I have trained a 3-class transformer classification model; the model used is distilbert-base-uncased.
Now, after training, I tried to convert the model to ONNX for faster inference using the script below.
`!python convert_graph_to_onnx.py --framework pt --model pt_line-distilbert --tokenizer distilbert-base-uncased --quantize onnx/line-distilbert.onnx`
```
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: pt_line-distilbert, tokenizer: distilbert-base-uncased)
Creating folder /home/segments/onnx/linetype
Using framework PyTorch: 1.6.0
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
head_mask is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask']
/home/miniconda3/envs/reas/lib/python3.8/site-packages/transformers/modeling_utils.py:1645: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape == tensor_shape for input_tensor in input_tensors
====== Optimizing ONNX model ======
2020-10-30 02:50:55.673526328 [W:onnxruntime:, inference_session.cc:1143 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED. The generated model may contain hardware and execution provider specific optimizations, and should only be used in the same environment the model was optimized for.
Optimized model has been written at /home/segments/onnx/line/line-distilbert-optimized.onnx: ✔
/!\ Optimized model contains hardware specific operators which might not be portable. /!\
As of onnxruntime 1.4.0, models larger than 2GB will fail to quantize due to protobuf constraint.
This limitation will be removed in the next release of onnxruntime.
Warning: onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
Quantized model has been written at /home/segments/onnx/line/line-distilbert-optimized-quantized.onnx: ✔
```
Now, when trying to do inference,
```
options = SessionOptions()
options.intra_op_num_threads = 1
options.execution_mode = ExecutionMode.ORT_SEQUENTIAL
model_path = "onnx/line/line-distilbert-optimized-quantized.onnx"
session = InferenceSession(model_path, options)
tokens = tokenizer.encode_plus("did you get it?", max_length=256, truncation=True, padding='max_length')
tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}
sequence, = session.run(None, tokens)
sequence.shape (1, 256, 768)
```
but my model output should be (1, 3) # 3 class classification model
Any way to fix it? I have gone through this issue :- https://github.com/huggingface/transformers/issues/4825 but there's no proper solution mentioned there. | 10-30-2020 09:02:07 | 10-30-2020 09:02:07 | It was working for me when using --pipeline sentiment-analysis<|||||>Hi @user06039 did you find a solution for this? Cause I am also facing the same issue. |
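Based on the comment above, that would mean re-running the conversion command from the first post with the pipeline flag added, roughly like this (the paths are the ones from the original command; flag support may depend on the transformers version):
```
!python convert_graph_to_onnx.py --framework pt --model pt_line-distilbert --tokenizer distilbert-base-uncased --pipeline sentiment-analysis --quantize onnx/line-distilbert.onnx
```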
transformers | 8,174 | closed | Possible bug in "trainer" when training "BertForPretraining.from_pretrained()" | ## Environment info
Environment is colab with GPU enabled. modules are provided in the Jupyter notebook on Google Drive here:
[https://colab.research.google.com/drive/1UX6NMXA2cHGUtDJwh_U6LL-kyd8Gyt9y?usp=sharing](https://colab.research.google.com/drive/1UX6NMXA2cHGUtDJwh_U6LL-kyd8Gyt9y?usp=sharing)
### Who can help
@sgugger
## Information
Model I am using BERT
BertForPreTraining.from_pretrained("bert-base-uncased")
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: Error can be reproduced with the Notebook provided above.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am attempting to "fine-tune" bert-base-uncased by training it on additional sentences. I am unable to do this with Trainer - I get an error message shown in the notebook.
## To reproduce
Steps to reproduce the behavior:
1. Execute the notebook
2. First example succeeds with model BertLMHeadModel.from_pretrained("bert-base-uncased")
3. Second example fails at train() simply by changing the model to BertForPreTraining.from_pretrained("bert-base-uncased")
The bug is entirely reproduced on the linked Jupyter Notebook above, when run on Google Colab. The error message is:
RuntimeError: grad can be implicitly created only for scalar outputs
## Expected behavior
The model "BertForPretraining.from_pretrained("bert-base-uncased") should train on the two sentences provided.
If you know a work-around for this bug, I will appreciate it.
Sylvain - good to see you doing interesting work!! - Dana Ludwig (student of fast.ai course and owner of your book)
| 10-30-2020 08:15:39 | 10-30-2020 08:15:39 | Hi Dana! I don't think this is a bug. You are not providing `BertForPreTraining` the `nsp_labels` it also requires for training, so it does not compute the loss (and then the rest fails). You should use `DataCollatorForNextSentencePrediction` to have the batches get those labels too (and it might requires using `TextDatasetForNextSentencePrediction` with it) or write your own `data_collator` that will add those labels.<|||||>Hi Sylvain! Thanks for the quick response! My understanding of the process for pre-training BERT is that it is self-supervised and creates it's own labels. For example, for "next sentence prediction", it looks at the input sentences and uses the "next sentence" as the label for that task. That's how it worked when I used the TensorFlow model to train my BERT from scratch. Does the HuggingFace trainer not do that? I will look at "extDatasetForNextSentencePrediction" to see if that has some answers. I just thought that HuggingFace framework would be easier to fine-tune than using Google TensorFlow code.<|||||>The trainer just does the training loop, it is independent from the tasks. Transformers provides tools to get the data together (which I mentioned) and ready for the Trainer on all the most common NLP tasks, BERT-pretraining objective included.<|||||>Hi Sylvain,
Your suggestion did the trick! I used "TextDatasetForNextSentencePrediction" to build my dataset and "DataCollatorForNextSentencePrediction" for my collator. It's training now and the validation loss is getting lower, so everything looks fine. As you remember, fine-tuning the baseline pre-trained model with new task-specific data was part of your workflow for ULMFIT, so I can imagine this use-case will come up a lot. If you would like me to clean up my test example notebook, I'd be glad to let you post it in your examples section. It took me days to get this far, so I'd like to save the next person some time if possible.
Thank you! Dana |
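A rough sketch of the setup described in this thread; the class names come from the comments above, but the constructor arguments are assumptions and may differ between versions:
```python
from transformers import (
    BertForPreTraining,
    BertTokenizer,
    DataCollatorForNextSentencePrediction,
    TextDatasetForNextSentencePrediction,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Placeholder corpus file: one sentence per line, documents separated by blank lines.
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="sentences.txt", block_size=128
)
collator = DataCollatorForNextSentencePrediction(tokenizer=tokenizer, mlm=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert_further_pretraining"),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```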
transformers | 8,173 | closed | Training loss is not decreasing when using the Roberta pre-trained model from the transformers library | I load the Roberta pre-trained model from the transformers library and use it for a sentence-pair classification task. The loss used to decrease per epoch during training until last week, but now, even though all of the parameters (including the batch size and the learning rate) have the same values, the loss does not decrease when I fit my model. I am a little confused; I have trained the model with various parameters and also tried another PyTorch implementation, but the loss is still not decreasing. Can anyone help me figure out the problem?
Here is the link to my code:
https://colab.research.google.com/drive/1CFg41KDHJSJNkehJOHbp3gfXRdva60oW?usp=sharing
and the dataset:
https://drive.google.com/drive/folders/1CUH_z_HI31-yfj8hOmRfJBKRKe_BNkku
| 10-30-2020 07:27:05 | 10-30-2020 07:27:05 | Hello! Could you open a post on the [forum](https://discuss.huggingface.co) instead? We try to keep issues for bugs only.<|||||>Hi, sure. |
transformers | 8,172 | closed | Create Speedtest.py | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-30-2020 04:21:05 | 10-30-2020 04:21:05 | Hello, could you provide more information on what this is? The template is empty, and I'm not sure what this brings to the library.<|||||>Don't spend time on this @LysandreJik this is just HacktoberFest spam |
transformers | 8,171 | closed | Need suggestion on contributing TFDPR | # 🌟 New model addition
## Model description
Hi, I would love to try contributing TFDPR. This is my first time, so I need some suggestions.
I have followed @sshleifer's [great PR on the TFBart model](https://github.com/huggingface/transformers/commit/829842159efeb1f920cbbb1daf5ad67e0114d0b9) on 4 files: `__init__.py`, `convert_pytorch_checkpoint_to_tf2.py`, `utils/dummy_tf_objects.py` and (newly created) `modeling_tf_dpr.py`.
Now the TF model works properly and can load the PyTorch weights successfully, producing the same output as its PyTorch counterparts **except** for small random noise (~1e-5), which I suspect comes from some dtype differences, but I could not find the cause.
I guess I need to add documentation in docs/source/model_doc/dpr.rst, and that's all?
**My question is: do I need to change/fix any other files, and/or do I need to do anything else before making a PR?**
To resolve TF vs. Pytorch naming issues, there's one change regarding `TFBertModel` vs. `TFBertMainLayer` as [discussed here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764) .
Thanks to @sshleifer for his help to solve the issue.
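A rough sketch of the pattern referred to above; the class and attribute names here are assumptions, not the final TFDPR code:
```python
import tensorflow as tf
from transformers import BertConfig
from transformers.modeling_tf_bert import TFBertMainLayer  # import path in v3.x; may differ later


class TFDPREncoderSketch(tf.keras.layers.Layer):
    def __init__(self, config: BertConfig, **kwargs):
        super().__init__(**kwargs)
        # Using the main layer (rather than TFBertModel) keeps the weight names
        # aligned with the PyTorch checkpoint.
        self.bert_model = TFBertMainLayer(config, name="bert_model")

    def call(self, input_ids, attention_mask=None, training=False):
        outputs = self.bert_model(input_ids, attention_mask=attention_mask, training=training)
        return outputs[0]  # sequence output
```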
## Open source status
* [X] the model implementation is available: (give details)
You can see all the modified codes with test run at :
https://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing
(to easily navigate the changes, please "find on page" for e.g. `TFDPRContextEncoder`)
* [X] the model weights are available: (give details)
At the moment, I use existing Pytorch weights, but will upload TF weights too.
* [X] who are the authors: (mention them, if possible by @gh-username)
@ratthachat | 10-30-2020 03:48:35 | 10-30-2020 03:48:35 | Hello! Thanks for offering to contribute the TF implementation of the DPR model! Something that may help you is to open a PR very early on, even if you have a lot of questions. This way we can help provide pointers, and we can guide you in the right direction.
Another aspect that may be of tremendous help, would be to follow the checklist when adding a new model. It is available [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model). If you open a PR, we recommend to put this checklist in the description so that everybody can follow better.
Let me know if I can help further.<|||||>@LysandreJik Thanks for your suggestion and the checklist which is just what I want!
I will try to follow the checklist as much as possible and then open a PR. (UPDATE: I have already opened a PR with the checklist.)
Please let me know if I should close this issue.<|||||>This is great, only the tests are left! No need to close the issue here, we can close this issue once the PR is merged.<|||||>Thanks for your kind words @LysandreJik !
At first, I had no idea how to test. Now I know I have to translate `test_modeling_dpr.py`, using the recent `test_modeling_tf_bart.py` as an example.
<|||||>@LysandreJik :D
After several hours of testing and debugging, my current model has already passed 27 tests :D
The test run is in Colab here : (in the last cell)
https://colab.research.google.com/drive/1czS_m9zy5k-iSJbzA_DP1k1xAAC_sdkf?usp=sharing
My [current repo](https://github.com/ratthachat/transformers) already contained `test_modeling_tf_dpr.py`
Could you please suggest me the next step (make a repo update with latest Transformers ?)<|||||>The next steps would be for us to review what you've contributed until now! We'll take a look as soon as possible.<|||||>Thanks Lysandre! I actually have aimed for TFRag . Meanwhile, I will make a new branch and use TFDPR on translating TFRag .<|||||>Close the issue as TFDPR is already merged. Very happy. Thanks a lot everybody!! |
transformers | 8,170 | closed | Create README.md | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-30-2020 03:18:36 | 10-30-2020 03:18:36 | |
transformers | 8,169 | closed | Create README.md | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-30-2020 02:54:30 | 10-30-2020 02:54:30 | |
transformers | 8,168 | closed | Create README.md | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-30-2020 02:50:24 | 10-30-2020 02:50:24 | |
transformers | 8,167 | closed | Create README.md | Telugu BERTU Readme file
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-30-2020 02:19:19 | 10-30-2020 02:19:19 | Thanks for sharing! You should add more metadata to your model card if possible: https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card |
transformers | 8,166 | closed | Replace swish with silu | This pull request replaces swish with silu. Note that "silu" still maps to `tf.keras.activations.swish`, not `tf.keras.activations.silu`, for TensorFlow, since silu is in the TensorFlow nightlies but not yet in the stable version of TensorFlow.
This fixes https://github.com/huggingface/transformers/issues/8100
@LysandreJik | 10-30-2020 02:13:26 | 10-30-2020 02:13:26 | Thanks @TFUsers for this important PR.
As far as I know the activation names are also directly inside the the `config.json` files in the model hub. @sgugger @LysandreJik Do we plan to update all of them?<|||||>Very good point @jplu. We can't change those names in the config hosted online because it wouldn't be backward-compatible, so we need to still accept the old names (without documenting the behavior). So we should leave the old `swish` in the dictionaries `ACT2FN`.<|||||>It looks like everything passes (ignoring "src/transformers/activations.py:52:5: F811 redefinition of unused 'silu' from line 40").<|||||>> I guess the cleanest approach in that regard would be to remove the definition of ACT2FN in these files, and instead import the centralized ACT2FN from the activations files
Line 30 of `modeling_bert.py` is
`from .activations import ACT2FN`
Are the main concerns resolved?<|||||>You're right, I was checking in the wrong file. Could you fix the code quality issue related to the redefinition of `silu`? You can follow what's done with the `gelu` method, by renaming the `silu` method to `_silu_python` and doing an if/else statement according to the torch version.
Also that version check (same with the `gelu`) doesn't seem robust at all. Could we use the `packaging` util to do something better? Something like:
```py
from packaging import version
if version.parse(torch.__version__) < version.parse("1.4"):
...
```<|||||>Thanks! |
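For reference, the pattern suggested in the review above would look roughly like this (the 1.7 cutoff is an assumption, that being the release where `torch.nn.functional.silu` appeared):
```python
import torch
from packaging import version


def _silu_python(x):
    return x * torch.sigmoid(x)


if version.parse(torch.__version__) < version.parse("1.7"):
    silu = _silu_python
else:
    silu = torch.nn.functional.silu
```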
transformers | 8,165 | closed | Fix typo: s/languaged/language/ | 10-30-2020 02:11:32 | 10-30-2020 02:11:32 | ||
transformers | 8,164 | closed | [s2s] Option to aggregate rouge deterministically | Optionally take randomness/sampling out of calculate_rouge_score.
Not breaking, the default is unchanged. | 10-29-2020 23:05:21 | 10-29-2020 23:05:21 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,163 | closed | [CI] Better reports #2 | As discussed at https://github.com/huggingface/transformers/pull/8110 this PR:
* [x] - generates 3 types of failure reports - long, short and one-per-line
* [x] - fixes multiple test suite tasks in a single job to allow them all to run regardless of the outcome of the previous test suites (using `if: always()`)
* [x] - adds a workaround for the cumbersome way github makes the artifacts available by printing the short failure report in its own tab, so getting to errors should be very easy now.
Once we perfect this hack to our liking, I intend to submit this to `pytest` and see if perhaps they would consider accepting it as a feature.
@sshleifer | 10-29-2020 22:32:15 | 10-29-2020 22:32:15 | Thanks!<|||||>OK, everything seems to be working well. Let me know if you have any comments/suggestions/recommendations before replicating this to the rest of the jobs.
See: https://github.com/huggingface/transformers/runs/1329578690?check_suite_focus=true
I will wait for https://github.com/huggingface/transformers/pull/8007 to be merged before spreading the love to the rest of the jobs, so that they won't need to deal with a lot of conflicts.<|||||>I also proposed this as a `pytest` feature: https://github.com/pytest-dev/pytest/issues/7972 - probably others would benefit from it.
|
transformers | 8,162 | closed | Fix typo: s/Chinees/Chinese/ | 10-29-2020 21:37:07 | 10-29-2020 21:37:07 | Oh, it was already fixed in #8159 |
|
transformers | 8,161 | closed | generate() always starts with bos_token_id | ## Environment info
- `transformers` version: 3.4.0
- Platform: Ubuntu 16.04
- Python version: 3.7
- PyTorch version (GPU?): 1.6 GPU
- Tensorflow version (GPU?): Doesn't matter
- Using GPU in script?: Doesn't matter
- Using distributed or parallel set-up in script?: Doesn't matter
### Who can help
- TextGeneration: @TevenLeScao
- T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* the official example scripts: [T5ForConditionalGeneration Doc](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5#transformers.T5ForConditionalGeneration)
## To reproduce
Steps to reproduce the behavior:
```python3
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small', return_dict=True)
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
outputs = model(input_ids=input_ids, labels=labels)
model.generate(input_ids)[0]
>>> tensor([ 0, 32099, 2447, 704, 32098, 8, 32097, 2447, 5, 1])
# <- start with 0 = pad_token_id = decoder_start_token_id of T5
input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1
outputs = model.generate(input_ids)
outputs
>>> tensor([[ 0, 2116, 43, 2008, 24, 293, 53, 3, 9, 1782, 19, 207,
21, 25, 3, 5, 1]])
# <- start with 0 = pad_token_id = decoder_start_token_id of T5
```
## Expected behavior
Generation outputs should not start with 0 (= pad_token_id = decoder_start_token_id of T5)
```python3
>>> tensor([ 32099, 2447, 704, 32098, 8, 32097, 2447, 5, 1])
>>> tensor([[ 2116, 43, 2008, 24, 293, 53, 3, 9, 1782, 19, 207,
21, 25, 3, 5, 1]])
```
## Analysis / Suggestion
This happens because the `input_ids` are initialized with [bos_token_id](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L329) or [decoder_start_token_id](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L432) then iteratively updated during `.generate()`
But should `.generate()` return the first token? It is confusing and makes it hard to debug since `tokenizer.decode()` hides this behavior.
It would be better to exclude the first token and just return `output [:, 1:]` in [the last line of generate()](https://github.com/huggingface/transformers/blob/v3.4.0/src/transformers/generation_utils.py#L512). | 10-29-2020 21:35:47 | 10-29-2020 21:35:47 | Hey @j-min, I don't think we will change this behavior because a) huge backward breaking change b) I think it's important to understand that generate **has** to start with a BOS/decoder_start_token_id (see Encoder-Decoder blog post: https://huggingface.co/blog/encoder-decoder)
Also you could add `skip_special_tokens=True` to the decode method to not return this token |
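Concretely, with the `outputs` tensor and `tokenizer` from the snippet above, that looks like:
```python
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)  # no leading pad/decoder-start token, no trailing </s>
```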
transformers | 8,160 | closed | ConnectionError: ('Connection aborted.', OSError("(32, 'EPIPE')")) | Getting this error uploading a T5-3b model (~5 GB) to the model-hub.
I don't think it's my connection; I didn't have any issues with other, smaller models, and this is the only one that fails.
Any thoughts on what could be the issue?
```
$ transformers-cli upload unifiedqa-t5-3b --organization allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/tokenizer_config.json to S3 under filename unifiedqa-t5-3b/tokenizer_config.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/special_tokens_map.json to S3 under filename unifiedqa-t5-3b/special_tokens_map.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/config.json to S3 under filename unifiedqa-t5-3b/config.json and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/spiece.model to S3 under filename unifiedqa-t5-3b/spiece.model and namespace allenai
About to upload file /Users/danielk/ideaProjects/t2t-qa/experiments/upload_models/3b/unifiedqa-t5-3b/pytorch_model.bin to S3 under filename unifiedqa-t5-3b/pytorch_model.bin and namespace allenai
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/tokenizer_config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/special_tokens_map.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/config.json
Your file now lives at:
https://s3.amazonaws.com/models.huggingface.co/bert/allenai/unifiedqa-t5-3b/spiece.model
0%| | 9756672/11406640119 [00:02<1:41:40, 1868312.30it/s]Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 331, in _send_until_done
return self.connection.send(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1737, in send
self._raise_ssl_error(self._ssl, result)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1639, in _raise_ssl_error
raise SysCallError(errno, errorcode.get(errno))
OpenSSL.SSL.SysCallError: (32, 'EPIPE')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1065, in _send_output
self.send(chunk)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 987, in send
self.sock.sendall(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 342, in sendall
sent = self._send_until_done(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 337, in _send_until_done
raise SocketError(str(e))
OSError: (32, 'EPIPE')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/util/retry.py", line 368, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/packages/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 1065, in _send_output
self.send(chunk)
File "/Users/danielk/opt/anaconda3/lib/python3.7/http/client.py", line 987, in send
self.sock.sendall(data)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 342, in sendall
sent = self._send_until_done(data[total_sent:total_sent + SSL_WRITE_BLOCKSIZE])
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 337, in _send_until_done
raise SocketError(str(e))
urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError("(32, 'EPIPE')"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danielk/opt/anaconda3/bin/transformers-cli", line 10, in <module>
sys.exit(main())
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/commands/transformers_cli.py", line 33, in main
service.run()
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/commands/user.py", line 234, in run
token=token, filename=filename, filepath=filepath, organization=self.args.organization
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/transformers/hf_api.py", line 168, in presign_and_upload
r = requests.put(urls.write, data=data, headers={"content-type": urls.type})
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/api.py", line 131, in put
return request('put', url, data=data, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/Users/danielk/opt/anaconda3/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError("(32, 'EPIPE')"))
``` | 10-29-2020 21:18:37 | 10-29-2020 21:18:37 | cc @julien-c <|||||>We've seen similar issues in the past @danyaljj.
We are going to release a new upload system for models in the coming week, can this wait till then? If it can't, and if you can upload your model to another bucket, we can copy it over manually. Let us know.<|||||>Hey @julien-c! Sounds fair, I think I can wait until your new system is rolled out. |
transformers | 8,159 | closed | Fix typo: indinces -> indices | 10-29-2020 20:21:03 | 10-29-2020 20:21:03 | > Looks good, thanks! Just make sure to run `make style` to have our scripts automatically fix the files you changed.
Oh. So should I send a new patch with the changes after running `make style`?<|||||>I'm not following. Your last commit had the styling changes so all is good.<|||||>> I'm not following. Your last commit had the styling changes so all is good.
Oh. But if I run `make style` I see tons of changes (193 files changed, 754 insertions(+), 2969 deletions(-)). For example, this file:
```diff
diff --git a/examples/adversarial/utils_hans.py b/examples/adversarial/utils_hans.py
index bf0623ff..17d4a8c4 100644
--- a/examples/adversarial/utils_hans.py
+++ b/examples/adversarial/utils_hans.py
@@ -112,10 +112,7 @@ if is_torch_available():
cached_features_file = os.path.join(
data_dir,
"cached_{}_{}_{}_{}".format(
- "dev" if evaluate else "train",
- tokenizer.__class__.__name__,
- str(max_seq_length),
- task,
+ "dev" if evaluate else "train", tokenizer.__class__.__name__, str(max_seq_length), task,
),
)
label_list = processor.get_labels()
@@ -281,10 +278,7 @@ class HansProcessor(DataProcessor):
def hans_convert_examples_to_features(
- examples: List[InputExample],
- label_list: List[str],
- max_length: int,
- tokenizer: PreTrainedTokenizer,
+ examples: List[InputExample], label_list: List[str], max_length: int, tokenizer: PreTrainedTokenizer,
):
"""
Loads a data file into a list of ``InputFeatures``
```<|||||>Are you sure you have proper versions of black/isort/flake8 ? Run `pip install -e .[dev]` in the repo to make sure you have them.
<|||||>Oh, yeah, it was that. Silly mistake. :hand: Sorry for the noise!<|||||>Np! |
|
transformers | 8,158 | closed | EncoderDecoderModel: tie weights between different classes of models | # ๐ Feature request
Tie weights between different classes of models, tie embedding matrices, update tutorial.
## Motivation
I have been following the Longformer2Roberta tutorial https://github.com/huggingface/transformers/blob/master/model_cards/patrickvonplaten/longformer2roberta-cnn_dailymail-fp16/README.md and it seems like the crucial part of tying weights is missing.
The EncoderDecoderModel is initialized with "allenai/longformer-base-4096" and "roberta-base", and "allenai/longformer-base-4096" was in turn initialized from "roberta-base". It seems natural to be able to tie their attention and FFNN weights, although dealing with the positional embeddings might be problematic. In any case, I think one feature that should definitely be implemented is tying the embedding matrices.
## Your contribution
For now I solve the issue with
```
model.encoder.embeddings.word_embeddings.weight = model.decoder.roberta.embeddings.word_embeddings.weight
```
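For context, a slightly fuller sketch of this workaround (same checkpoints as in the tutorial mentioned above; the exact attribute path to the decoder embeddings can differ between model classes and library versions):

```python
# Sketch of the manual embedding tying described above; not an official API.
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "allenai/longformer-base-4096", "roberta-base"
)
# Share the word-embedding matrix between encoder and decoder.
model.encoder.embeddings.word_embeddings.weight = (
    model.decoder.roberta.embeddings.word_embeddings.weight
)
```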
@patrickvonplaten | 10-29-2020 18:41:46 | 10-29-2020 18:41:46 | Yeah I think you have a good point here! Was discussing this with @ibeltagy and I think we should add a `_tie_encoder_decoder_word_embeddings(...)` that does exactly what you suggested. We should probably run this method when initializing an Encoder-Decoder model and if word embeddings are of the same size. We can provide a `tie_encoder_decoder_word_embeds` config params that defaults to True.
@alexyalunin do you want to try to make a PR for this ? :-)<|||||>> Yeah I think you have a good point here! Was discussing this with @ibeltagy and I think we should add a `_tie_encoder_decoder_word_embeddings(...)` that does exactly what you suggested. We should probably run this method when initializing an Encoder-Decoder model and if word embeddings are of the same size. We can provide a `tie_encoder_decoder_word_embeds` config params that defaults to True.
>
> @alexyalunin do you want to try to make a PR for this ? :-)
Ok, let me try this. I will put you in the reviewers. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,157 | closed | [testing] distributed: correct subprocess output checking | This PR fixes an issue revealed on CI - https://github.com/huggingface/transformers/runs/1327577422
* the external subprocess runner will now be more flexible and check `stdout|stderr` to validate that the subprocess sent at least some output. Currently the code checks only `stdout`, which isn't right since the subprocess may not write anything there.
* adds a `stdout:` prefix to the subprocess's stdout, as it was already doing for `stderr`.
@sgugger | 10-29-2020 18:00:23 | 10-29-2020 18:00:23 | |
transformers | 8,156 | closed | BertTokenizer loses unicode character | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-109-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
The tokenizer seems to lose a specific unicode character on tokenization. From the sentence
`General Saprang Kalayanamitr ( Thai : <unk> เน เธ เธเธฑเธฅเธขเธฒเธเธกเธดเธเธฃ ;`
from line 2557 of the Wiki02 training dataset, there is a little dot after and above the `<unk>`;
however, the tokenizer produces
` ['general', 'sap', '##rang', 'kala', '##yana', '##mit', '##r', '(', 'thai', ':', '<', 'un', '##k', '>', 'เธ', '[UNK]', ';']`
## To reproduce
`from transformers import BertTokenizer`
`t = BertTokenizer.from_pretrained('bert-base-uncased')`
`o = t.tokenize('General Saprang Kalayanamitr ( Thai : <unk> เน เธ เธเธฑเธฅเธขเธฒเธเธกเธดเธเธฃ ;')`
`o`
`['general', 'sap', '##rang', 'kala', '##yana', '##mit', '##r', '(', 'thai', ':', '<', 'un', '##k', '>', 'เธ', '[UNK]', ';']`
## Expected behavior
A subword should be produced for the ' เน ' token. Otherwise, there should at least be an option to get a warning about removed characters.
| 10-29-2020 17:47:17 | 10-29-2020 17:47:17 | I'm having a similar problem since upgrading to 4.0 where certain unicode characters are being "eaten" even though I set 'use_fast' = False.
## Environment info
- `transformers` version: 4.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
BertTokenizer
The problem is that certain unicode characters are no longer turned into UNK tokens by the tokenizer, as they were in earlier versions.
## To reproduce
```
import transformers
import torch
from transformers import BertTokenizer
print(torch.__version__)
print(transformers.__version__)
# THERE IS A UNICODE character between the , and '' (specifically \U+200D\U+200D\U+200D\U+200D)
sentence = ": ุกููพูุงุนูุฑ ุณูููุง , โโโโ '' Upal"
tokenizer = BertTokenizer.from_pretrained(
"bert-base-cased",
do_lower_case=False,
use_fast=False
)
print(tokenizer.tokenize(sentence))
# output 4.0.0
[':', '[UNK]', '[UNK]', ',', "'", "'", 'Up', '##al']
# output from 3.0.1
[':', '[UNK]', '[UNK]', ',', '[UNK]', "'", "'", 'Up', '##al']
```<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 8,155 | closed | ONNX T5 with Beam Search | Hey guys,
I didn't know where this belonged so opening up a generic issue.
I was working on integrating the ONNX T5 code by @abelriboulot with the HuggingFace Beam Search decoding code since I already had a decently performing T5 model for summarization and wanted to improve performance on CPU while maintaining the inference accuracy.
It works for the most part, but it is slower because the HF code uses cached past state values to speed up decoding. I got around this by creating two decoders with an LM head: one that does not take past values, for the initial decoding step, and another that does, for the subsequent steps. This is a bit complicated, since the past values have to be flattened out to pass through the ONNX graph, but I did that and it works for getting the output back.
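(For illustration, a small helper of the kind this flattening implies — a hypothetical sketch, assuming the 12-layer T5 layout with four past tensors per layer used in the export code further down:)

```python
# Hypothetical helper: regroup 48 flat ONNX past-state outputs into
# T5-style past_key_values tuples (12 layers x 4 tensors, layer-major order).
def unflatten_past(flat_past_states, num_layers=12):
    assert len(flat_past_states) == num_layers * 4
    return tuple(
        tuple(flat_past_states[4 * i : 4 * i + 4]) for i in range(num_layers)
    )
```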
But for passing the input parameters, I get the following error:
**RUNTIME_EXCEPTION : Non-zero status code returned while running Mul node. Name:'Mul_48' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/math/element_wise_ops.h:479 void onnxruntime::BroadcastIterator::Init(int64_t, int64_t) axis == 1 || axis == largest was false. Attempting to broadcast an axis by a dimension other than 1. 2 by 3**
I feel like I am close to a solution that could eventually be added to the repo, but this error is tripping me up :(
Any help whatsoever will be appreciated.
Thanks
@mfuntowicz @abelriboulot @patrickvonplaten @patil-suraj @sshleifer
ONNX Export code:
```python
import torch  # needed for the dummy past-state tensors below

past_state_input_pre = torch.rand((1,12,1,64))
past_state_input_post = torch.rand((1, 12, 10, 64))
past_key_value_states = [(past_state_input_pre, past_state_input_pre, past_state_input_post, past_state_input_post) for i in range(12)]
past_val_outputs = {'past_states_op_'+str(i): {0:'batch', 2: 'sequence'} for i in range(48)}
past_val_inputs = {'past_states_ip' + str(i): {0: 'batch', 2: 'sequence'} for i in range(48)}
dynamix_axes_dict = {
'input_ids': {0:'batch', 1: 'sequence'},
'encoder_hidden_states': {0:'batch', 1: 'sequence'}
}
dynamix_axes_dict.update(past_val_inputs)
dynamix_axes_dict.update({'hidden_states': {0:'batch', 1: 'sequence'}})
dynamix_axes_dict.update(past_val_outputs)
output_names_list = ['hidden_states'] + ['past_states_op_' + str(i) for i in range(48)]
input_names_list = ['input_ids', 'encoder_hidden_states'] + ['past_states_ip' + str(i) for i in range(48)]
# Exports to ONNX
_ = torch.onnx.export(
decoder_with_lm_head,
(torch.tensor([[42]]), simplified_encoder(input_ids), past_key_value_states),
f"{output_prefix}-decoder-with-lm-head.onnx",
export_params=True,
opset_version=12,
input_names=input_names_list,
output_names=output_names_list,
dynamic_axes= dynamix_axes_dict) | 10-29-2020 16:59:11 | 10-29-2020 16:59:11 | Hi @amanpreet692 ,
I'm not sure what this error means, but I have a `T5` `onnx` version ready which is compatible with the `generate` method.
To be able to use cache I exported the `encoder` and `lm_head` to `onnx` and kept the `decoder` in `torch`. This is bit hacky but still gives 1.4-1.6x speed-up for beam search, I'll be sharing it soon.<|||||>Yep, Even I was able to do that but since majority of the time is taken while decoding I wanted to convert decoder as well! Will keep trying for now.<|||||>@patil-suraj A question, somehow converting both lm-head and encoder is giving me worse result as compared to only converting the encoder. Did you try any additional optimizations like quantization?<|||||>No, I didn't try quantization with T5, so far I'm getting good enough speed-up and results are same as that of torch.
Not related to your `onnx` question, but you could also distill the models to get additional speed-ups with minimal performance drop. Sam has just released an amazing s2s distillation [paper](https://arxiv.org/pdf/2010.13002.pdf). See if that helps you with speeding up inference.<|||||>Hey @amanpreet692!
Thanks a lot for looking at making the ONNX version compatible with beam search. Could you send over your full script to make it easier to debug? Happy to hop on a call this week and hear a bit more what you have in mind. The two decoders solution sounds interesting!<|||||>I've posted the script on the [forum ](https://discuss.huggingface.co/t/speeding-up-t5-inference/1841).<|||||>@abelriboulot Thanks a lot for getting back :)
Here are the scripts for my work (The first two are changes on top of your code and the third is my custom model with 2 decoders):
1) [huggingface_utilities.py](https://gist.github.com/amanpreet692/41dba767220b5b1a6417066197781328) : Additional changes to include past states as input and output and convert 3 components (2 decoders, 1 encoder) into onnx format.
2) [models.py](https://gist.github.com/amanpreet692/d36af959e0d8d9cf84b19ff26d9b19d8) : Smallish change to include a new class CombinedDecoderNoPast
3) [t5_onnx_model.py](https://gist.github.com/amanpreet692/a8bf2d45a8f368830f3838790461d26b) : Complete T5 model that works with beam search, major changes in decoder processing.
Just an update: I was able to resolve to above issue but started getting a new shape issue for buffers, have raised an issue on onnx repo as well: [ONNX Issue](https://github.com/microsoft/onnxruntime/issues/5646)
Any pointers for debugging would be great, and sure it would be awesome if we can get on a call and work on this!!
Will keep trying on my own till then.
@patil-suraj Good job! I looked at your code and I had tried something very similar. Although am still skeptical as I was getting worse performance with converting both encoder and lm-head rather than only encoder. Will look at your results again.
Thanks again @abelriboulot !<|||||>as long as we pass same arguments to `generate` then we should get same results, I didn't observe any loss in accuracy.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,154 | closed | [s2s] Trainer vs PTL timings | For the following two commands,
+ PTL finishes: 2.01 it/s, ~3H, 21.32 Rouge
+ Trainer: 1.0 it/s, roughly 5.5H, 21.36 Rouge
I wanted to report this so I don't lose track of it. Looked at the code, and don't see any obvious issue, besides that the slowdown is suspiciously close to 2x.
Any idea @patil-suraj ?
### PTL Command
```bash
export BS=32
export GAS=1
python finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--val_check_interval 0.25 \
--n_val 500 \
--num_train_epochs 2 \
--freeze_encoder --freeze_embeds --data_dir cnn_dm \
--max_target_length 142 --val_max_target_length=142 \
--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps=$GAS \
--model_name_or_path sshleifer/student_cnn_12_6 \
--tokenizer_name facebook/bart-large \
--warmup_steps 500 \
--output_dir distilbart-cnn-12-6
```
### Trainer command
same as `builtin_trainer/train_distilbart_cnn.sh`:
```bash
export BS=32
export GAS=1
export m=sshleifer/student_cnn_12_6
export tok=facebook/bart-large
export MAX_TGT_LEN=142
python finetune_trainer.py \
--model_name_or_path $m --tokenizer_name $tok \
--data_dir cnn_dm \
--output_dir distilbart-cnn-12-6-trainer --overwrite_output_dir \
--learning_rate=3e-5 --sortish-sampler \
--warmup_steps 500 \
--fp16 \
--n_val 500 \
--gradient_accumulation_steps=$GAS \
--per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \
--freeze_encoder --freeze_embeds \
--num_train_epochs=2 \
--save_steps 3000 --eval_steps 3000 \
--logging_first_step \
--max_target_length 142 --val_max_target_length $MAX_TGT_LEN --test_max_target_length $MAX_TGT_LEN \
--do_train --do_eval --do_predict --evaluate_during_training \
--predict_with_generate --sortish_sampler
``` | 10-29-2020 16:41:53 | 10-29-2020 16:41:53 | I might be misreading progress bars. Running another test, will reopen if I can replicate. <|||||>PTL (device 1) using less GPU ram it seems:

Progress bars (note that PTL/Bottom is per epoch):

<|||||>I'm also experiencing slow down on TPU's, didn't run the new changes on GPU yet. I"ll investigate this<|||||>Thx!<|||||>I've confirmed that builtin ~2x slower on 1 GPU than PTL. Same commands as above on a different machine. All the screenshots above are valid.<|||||>These seem to run at the same speed if you pass `--fp16_opt_level=O1` to pytorch-lightning. Verifying now and will post results in 5 hrs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,153 | closed | Add a template for examples and apply it for mlm and plm examples | # What does this PR do?
This PR adds a cookiecutter template to add a new example and experiments with it to add the run_mlm new script and a run_plm specific to XLNet. It runs with the same results as the old version.
Side note: the part for random masking applied in a data collator can become platform agnostic later on, if datasets adds a lazy map method. | 10-29-2020 15:18:44 | 10-29-2020 15:18:44 | |
transformers | 8,152 | closed | Document tokenizer_class in configurations | # What does this PR do?
Some random guy made a PR adding a `tokenizer_class` argument to `PretrainedConfig` but did not document it. This PR fixes that. | 10-29-2020 14:13:57 | 10-29-2020 14:13:57 | โค๏ธ |
transformers | 8,151 | closed | Smarter prediction loop and no- -> no_ in console args | # What does this PR do?
This PR does two things:
- the first one is to replace `no-` with `no_` in the `HFArgumentParser` so that arguments get a more consistent name: for instance `use_tokenizer_fast` in the new examples script gives an argument `no-use_tokenizer_fast`, and the inconsistency between - and _ makes it hard to find.
- the second one is to avoid computing the predictions and labels (and storing them) in the evaluation of a `Trainer` when there is no `compute_metrics` function. | 10-29-2020 14:07:49 | 10-29-2020 14:07:49 | |
transformers | 8,150 | closed | [s2s] distillBART docs for paper replication | 10-29-2020 13:37:50 | 10-29-2020 13:37:50 | ||
transformers | 8,149 | closed | Model card: Update widget examples. | The previous example in the widget has an error, correct it this time | 10-29-2020 12:47:06 | 10-29-2020 12:47:06 | |
transformers | 8,148 | closed | Masking in Pooling Layer from BERT Output? | In Keras, when an embedding layer uses masking, the mask is propagated to subsequent layers such as pooling or RNN layers. I wonder if the same holds when using transformers' BERT models, i.e. in the following, is the attention mask also used as a mask by the pooling layer, so that the averages do not include padding tokens?
```
import tensorflow as tf
from tensorflow.keras.layers import Input, GlobalAveragePooling1D, Dropout, Dense
from tensorflow.keras.models import Model
from transformers import TFBertModel

id_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
mask_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
atn_ = Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32)
bert_model = TFBertModel.from_pretrained('bert-base-uncased')
embedding = bert_model(id_, attention_mask=mask_, token_type_ids=atn_)[0]
x = GlobalAveragePooling1D()(embedding) #are here attention_mask used as mask?
x = Dropout(0.2)(x)
out = Dense(3, activation='softmax')(x)
model = Model(inputs=[id_, mask_, atn_], outputs=out)
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)
```
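For what it's worth, a sketch of computing the masked average explicitly from the attention mask, in place of the `GlobalAveragePooling1D` line, rather than relying on mask propagation (names reused from the snippet above; a sketch, not verified against this exact setup):

```python
# Masked mean over the sequence dimension: padding positions contribute 0
# and the sum is divided by the number of real (non-padding) tokens.
mask_f = tf.cast(mask_, tf.float32)                        # (batch, seq_len)
summed = tf.reduce_sum(embedding * mask_f[:, :, None], axis=1)
counts = tf.reduce_sum(mask_f, axis=1, keepdims=True)
x = summed / tf.maximum(counts, 1.0)                       # masked average pooling
```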
https://discuss.huggingface.co/t/bert-output-for-padding-tokens/1550 | 10-29-2020 12:05:08 | 10-29-2020 12:05:08 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,147 | closed | [Model cards] Seq2Seq tags | # What does this PR do?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 10-29-2020 11:45:34 | 10-29-2020 11:45:34 | ๐ |
transformers | 8,146 | closed | Make tokenizer.pad() also pad `labels` | # ๐ Feature request
Make tokenizer.pad() also pad `labels`
## Motivation
I tried to use this:
https://github.com/huggingface/transformers/blob/8065fea87007fbf7542fc060ff8ddd0b5df567da/src/transformers/data/data_collator.py#L69
But since `labels` is not padded, the result cannot be turned into a tensor: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
It currently pads `input_ids, attention_mask, token_type_ids, special_tokens_mask`
It seems logical to me that `tokenizer.pad()` should also pad `'labels'`.
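Until then, a minimal sketch of a collate function that pads `labels` by hand alongside `tokenizer.pad()` (the padding value and feature structure here are assumptions):

```python
import torch

def collate_with_padded_labels(features, tokenizer, label_pad_id=-100):
    # Pull labels out so tokenizer.pad() only sees the keys it knows how to pad.
    labels = [list(f.pop("labels")) for f in features]
    batch = tokenizer.pad(features, padding=True, return_tensors="pt")
    max_len = max(len(l) for l in labels)
    batch["labels"] = torch.tensor(
        [l + [label_pad_id] * (max_len - len(l)) for l in labels]
    )
    return batch
```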
## Your contribution
I have already created a PR #8116. It solves the problem above. | 10-29-2020 10:47:41 | 10-29-2020 10:47:41 | |
transformers | 8,145 | closed | TransformerXL: StopIteration: Caught StopIteration in replica 0 on device 0 | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@TevenLeScao
## Error I get
```
Traceback (most recent call last):
File "/ai/fzc/minGPT/transformerXLtest.py", line 163, in <module>
input_ids=inputs["input_ids"].to(device),
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/opt/conda/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py", line 866, in forward
mems = self.init_mems(bsz)
File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_transfo_xl.py", line 800, in init_mems
param = next(self.parameters())
StopIteration
```
## To reproduce the problem
Run Code below:
```python
import torch
from torch.nn import DataParallel
from transformers import TransfoXLTokenizer, TransfoXLModel
device = "cuda:0"
# Get model
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLModel.from_pretrained('transfo-xl-wt103', return_dict=True)
model = DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
model.to(device=device)
# Run forward
inputs = tokenizer(["This is an example"], return_tensors="pt")
outputs = model(
input_ids=inputs["input_ids"].to(device),
)
print(f"outputs: {outputs}")
print("Success.")
```
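(For comparison, a single-device variant of the same forward pass, without the `DataParallel` wrapper — a sketch:)

```python
# Single-device variant of the repro above; no DataParallel replica is created,
# so the init_mems path discussed in the comments below is not hit.
import torch
from transformers import TransfoXLTokenizer, TransfoXLModel

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103", return_dict=True).to(device)

inputs = tokenizer(["This is an example"], return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"].to(device))
print(outputs.last_hidden_state.shape)
```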
| 10-29-2020 09:32:06 | 10-29-2020 09:32:06 | The same code I tested on GPT-2, works fine for me. Guess something wrong with transformer-xl
GPT-2 code below:
```Python
import torch
from torch.nn import DataParallel
from transformers import GPT2Tokenizer, GPT2LMHeadModel
device = "cuda:0"
# Get model
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model = DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
model.to(device=device)
# Run forward
inputs = tokenizer(["This is an example"], return_tensors="pt")
outputs = model(
input_ids=inputs["input_ids"].to(device),
attention_mask=inputs["attention_mask"].to(device),
labels=inputs["input_ids"].to(device),
)
print(f"outputs: {outputs}")
print("Success.")
```<|||||>Seems like this was overlooked in #4300 ! I'll update TransfoXL in the same way.<|||||>As of now, Pytorch [doesn't support calling](https://github.com/pytorch/pytorch/issues/40457) `self.parameters()` within `DataParallel`, which causes the current issue. Even after fixing that, which was straightforward, Pytorch [also doesn't support calling](https://github.com/pytorch/pytorch/issues/36035) `self.ParameterList` and `self.ParameterDict`, which are also used in TransfoXL, which will cause another issue. As Pytorch is moving people away from `DataParallel`, they are unlikely to fix this anytime soon on their end. On our end, this is going to be much harder to fix in a non-BC way, as changing the way the model is organized means previous checkpoints cannot be loaded. In the meantime, you could use `DistributedDataParallel` instead. <|||||>I used `torch.nn.parallel.DistributedDataParallel` to run the model in forward pass with the script below:
```python
import os
import sys
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
def setup(rank, world_size):
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '12355'
# initialize the process group
dist.init_process_group("gloo", rank=rank, world_size=world_size)
def demo_model_parallel(rank, world_size):
print(f"Running DDP with model parallel example on rank {rank}.")
setup(rank, world_size)
# transfoXL model
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
mp_model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103', return_dict=True)
ddp_mp_model = DDP(mp_model, find_unused_parameters=True)
loss_fn = nn.MSELoss()
optimizer = optim.SGD(ddp_mp_model.parameters(), lr=0.001)
for i in range(10):
optimizer.zero_grad()
# check to see if the model returns different losses
if rank == 0:
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
else:
inputs = tokenizer("Borat and the republic of Kazakhistan!", return_tensors = "pt")
outputs = ddp_mp_model(input_ids = inputs["input_ids"], labels=inputs["input_ids"], return_dict = True)
_l = outputs.losses.mean() # documentation is incorrect there is no `loss` but `losses`
print(_l)
_l.backward()
optimizer.step()
def run_demo(demo_fn, world_size):
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == "__main__":
run_demo(demo_model_parallel, 2)
```
However during backward pass I get this error:
```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that
your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1)
passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making
sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the
distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward`
function. Please include the loss function and the structure of the return value of `forward` of your module when reporting
this issue (e.g. list, dict, iterable).
```
The code is modified from [tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)<|||||>@TevenLeScao
I have the same error when trainning TransformerXL.
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 1056, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 866, in forward
mems = self.init_mems(bsz)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 800, in init_mems
param = next(self.parameters())
StopIteration
how to solve it quikly?<|||||>@yashbonde this seems to be an unrelated issue! I'll take a look tomorrow.
@ismymajia see my message above - this is a Pytorch issue that we cannot fix without breaking backwards compatibility of checkpoints, as they're slowly stopping support for `DataParallel`.<|||||>Now the problem is that huggingface tansformer-xl model cannot be trained. huggingface tansformer-xl model will not be supported? Do you plan to update the tansformer-xl code?
@TevenLeScao<|||||>As I said in my previous post, you can just use single-GPU or distributed training instead. Of course transformer-xl is supported ; but we cannot update its code to bypass the Pytorch issues with `DataParallel` without breaking backwards compatibility with previous checkpoints.<|||||>when i train tansformer-xl as below:
python -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py --output_dir ${model_dir} \
--tokenizer_name $data_dir/wordpiece-custom.json \
--config_name $data_dir/$config_file \
--train_data_files "$data_dir/train*.txt" \
--eval_data_file $data_dir/valid.txt \
--block_size=128 \
--do_train \
--do_eval \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 1 \
--learning_rate 6e-4 \
--weight_decay 0.01 \
--adam_epsilon 1e-6 \
--adam_beta1 0.9 \
--adam_beta2 0.98 \
--max_steps 500_000 \
--warmup_steps 24_000 \
--fp16 \
--logging_dir ${model_dir}/tensorboard \
--save_steps 5000 \
--save_total_limit 20 \
--seed 108 \
--max_steps -1 \
--num_train_epochs 20 \
--dataloader_num_workers 0 \
--overwrite_output_dir
occur error:
[INFO|language_modeling.py:324] 2020-11-11 13:50:49,520 >> Loading features from cached file /opt/ml/input/data/training/mm/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train3.txt [took 93.739 s]
[INFO|language_modeling.py:324] 2020-11-11 13:52:30,959 >> Loading features from cached file /opt/ml/input/data/training/mm/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train2.txt [took 101.436 s]
Traceback (most recent call last):
File "run_language_modeling.py", line 350, in <module>
main()
File "run_language_modeling.py", line 313, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 657, in train
else True
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 333, in __init__
self.broadcast_bucket_size)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 549, in _distributed_broadcast_coalesced
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 261, in <module>
main()
File "/usr/local/lib/python3.6/dist-packages/torch/distributed/launch.py", line 257, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'run_language_modeling.py', '--local_rank=3', '--output_dir', '/opt/ml/input/data/training/mm/huggingface/data/20201107/checkpoints/transfo-xl_1L_dembed1024_dhead64_dInner4096_dmodel1024_heads16_1', '--tokenizer_name', '/opt/ml/input/data/training/mm/huggingface/data/20201107/wordpiece-custom.json', '--config_name', '/opt/ml/input/data/training/mm/huggingface/data/20201107/config-transfo-xl.json', '--train_data_files', '/opt/ml/input/data/training/mm/huggingface/data/20201107/train*.txt', '--eval_data_file', '/opt/ml/input/data/training/mm/huggingface/data/20201107/valid.txt', '--block_size=128', '--do_train', '--do_eval', '--per_device_train_batch_size', '16', '--gradient_accumulation_steps', '1', '--learning_rate', '6e-4', '--weight_decay', '0.01', '--adam_epsilon', '1e-6', '--adam_beta1', '0.9', '--adam_beta2', '0.98', '--max_steps', '500_000', '--warmup_steps', '24_000', '--fp16', '--logging_dir', '/opt/ml/input/data/training/mm/huggingface/data/20201107/checkpoints/transfo-xl_1L_dembed1024_dhead64_dInner4096_dmodel1024_heads16_1/tensorboard', '--save_steps', '5000', '--save_total_limit', '20', '--seed', '108', '--max_steps', '-1', '--num_train_epochs', '20', '--overwrite_output_dir']' died with <Signals.SIGKILL: 9>.
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
my env is below:
pytorch:1.6+cu101
transformer 3.4
tokenizer 0.9.3
@TevenLeScao How to solve it ? <|||||>Hey, looking at the error message (the SIGKILL) this looks more like the machine killing the process than like a bug. What's your setup? This happens if the machine runs out of RAM for example. <|||||>I am trainning the transformer-xl on one machine with multi-gpus by ddp.
I don't know if this is a problem.
@TevenLeScao
<|||||>Hey, usually when you get a mysterious CUDA error like this ("RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:518, unhandled cuda error, NCCL version 2.4.8") it's because of GPU memory. I'll close the issue now as this is unrelated, and does not particularly look like a library bug. You should probably post on the forums at https://discuss.huggingface.co/ to see if you can get help with debugging!<|||||>[INFO|language_modeling.py:242] 2020-11-11 11:54:46,363 >> Loading features from cached file /opt/ml/input/data/training/kyzhan/huggingface/data/train40G/cached_lm_PreTrainedTokenizerFast_126_train3.txt [took 116.431 s]
/ _th_index_copy_
main()
File "run_hf_train_lm_ti.py", line 338, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 758, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1056, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1082, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 1056, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 888, in forward
word_emb = self.word_emb(input_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py", line 448, in forward
emb_flat.index_copy_(0, indices_i, emb_i)
RuntimeError: Expected object of scalar type Float but got scalar type Half for argument #4 'source' in call to _th_index_copy_
Now encounter this problem. @TevenLeScao<|||||>I'm closing this issue as it concerns an unrelated problem that we cannot solve. Can you open a new one with a complete description ? |
transformers | 8,144 | closed | ETA on TFEncoderDecoderModel and is BERTShare from https://arxiv.org/pdf/1907.12461.pdf planned? | # ๐ Feature request
Is there a plan for BERTShare from https://arxiv.org/pdf/1907.12461.pdf to be an option for the EncoderDecoderModel?
Also, I can see that an TFEncoderDecoderModel is on the 'ToDo' list for the [EncoderDecoder Framework](https://github.com/huggingface/transformers/projects/23). Any chance of an expected time of completion of this would be greatly appreciated.
## Motivation
Having an easy to use seq2seq model integrated into hugging face (with TensorFlow) would help my research immensely. Also, models like BERTShare are much more parameter efficient.
## Your contribution
I am happy to help in any form. Not sure where help is needed tbh.
| 10-29-2020 08:27:51 | 10-29-2020 08:27:51 | I think we can keep this open, this looks like a fun project. Pinging @patrickvonplaten to let him know!<|||||>The models of https://arxiv.org/pdf/1907.12461.pdf are already added. You can check them out here (they are not called shared, but are shared indeed): https://huggingface.co/models?search=google%2Froberta2roberta
Also, I'll be releasing an in-detail notebook about these models on Monday, so stay tuned :-)
No ETA on TFEncoderDecoder models, but it's definitely on the roadmap :-) <|||||>> The models of https://arxiv.org/pdf/1907.12461.pdf are already added. You can check them out here (they are not called shared, but are shared indeed): https://huggingface.co/models?search=google%2Froberta2roberta
>
> Also, I'll be releasing an in-detail notebook about these models on Monday, so stay tuned :-)
>
> No ETA on TFEncoderDecoder models, but it's definitely on the roadmap :-)
Thanks, I am switching from TF to PyTorch :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,143 | closed | Trainer makes RAM go out of memory after a while | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.14.193-113.317.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger @patrickvonplaten
## Information
Model I am using: T5
The problem arises when using my own modified scripts:
I load my dataset this way:
from datasets import load_dataset
from transformers import TrainingArguments, Trainer

def tokenize(batch):
tokenized_input = tokenizer(batch[text_column], padding=True, truncation=True, max_length=153)
tokenized_label = tokenizer(batch[generated_column], padding=True, truncation=True, max_length=274)
tokenized_input['labels'] = tokenized_label['input_ids']
return tokenized_input
dataset = load_dataset('csv', data_files=dataset_file, split='train')
dataset = dataset.train_test_split(test_size=0.05, seed=SEED)
train_dataset = dataset['train']
val_dataset = dataset['test']
train_dataset = train_dataset.map(tokenize, batched=True, batch_size=len(train_dataset))
val_dataset = val_dataset.map(tokenize, batched=True, batch_size=len(val_dataset))
train_dataset.set_format('numpy', columns=['input_ids', 'attention_mask', 'labels'])
val_dataset.set_format('numpy', columns=['input_ids', 'attention_mask', 'labels'])
And then I use Trainer to train my T5 model like this:
training_args = TrainingArguments(
output_dir=output_dir,
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
eval_accumulation_steps=1,
learning_rate=0.001,
evaluation_strategy='steps',
save_steps=1000000,
save_total_limit=1,
remove_unused_columns=True,
run_name=now,
logging_steps=100,
eval_steps=100,
logging_first_step=True
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train()
The tasks I am working on is my own task or dataset:
I am using a custom dataset for machine translation which is 12MB in size and has 18,000 examples. The sequence max token sizes are 153 for input and 274 for output. I have also added 68 special tokens, as the dataset contains many symbols.
## To reproduce
Steps to reproduce the behavior:
1. Load a dataset like I did.
2. Start training using Trainer
3. During every evaluation, RAM usage grows and is not freed. So the next evaluation step accumulates other RAM and so on, until you reach the maximum and the training stops giving this error: `RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 281882432 bytes. Error code 12 (Cannot allocate memory)` (The machine I am using has 60GB RAM).
## Expected behavior
The evaluation RAM should be freed after every step. Looks like something gets accumulated while training and RAM is not freed. I get the same behavior if I don't run training but only evaluation: after many evaluation steps the RAM blows up.
| 10-29-2020 07:44:21 | 10-29-2020 07:44:21 | Additional info:
as a workaround, I am now using a smaller validation set, but it is not ideal. If the memory issue can't be solved, a better solution could be to introduce an option to use a random subset of the validation set to use to evaluate during training.<|||||>If the problem is just that the RAM is not freed after evaluation, we can try to work around that (though Python garbage collector can be tricky to trigger).
If the validation set gives predictions that do not fit in RAM, we can't do much in the generic Trainer directly. You can subclass `Trainer` and the `evaluate` function to use the `datasets` library `Metric` objects, which store the predictions with arrows so use less RAM.<|||||>> If the problem is just that the RAM is not freed after evaluation, we can try to work around that (though Python garbage collector can be tricky to trigger).
I think the problem is not this one. The RAM is freed after evaluation (after some seconds), but it is not freed between an evaluation single step and the other. Correct me if I am wrong, but after a step the only thing to keep in RAM should be the loss, so it can be averaged at the end of evaluation, so the RAM usage should not increase as the steps go ahead, which instead is what happens.<|||||>During evaluation, we need to store predictions and labels too, for the metric computation. If you want to store the loss only, then pass along the flag `prediction_loss_only=True` to your training arguments, which will use less more RAM (and you can then probably remove the `eval_accumulation_steps=1` to speed up evaluation).<|||||>I didn't know that, it solved my problem thank you!<|||||>Should even be automatic now as I just merged a PR on master where the Trainer does not bother saving the predictions when there is no `compute_metrics` (which is your case here). |
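For reference, a condensed sketch of the fix discussed in this thread (argument values taken from the original report; `prediction_loss_only` availability may depend on the installed version):

```python
training_args = TrainingArguments(
    output_dir=output_dir,
    per_device_eval_batch_size=8,
    evaluation_strategy="steps",
    eval_steps=100,
    prediction_loss_only=True,  # keep only the loss during evaluation
)
```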
transformers | 8,142 | closed | Is there any Jupyter notebook or detailed example using BertGeneration or EncoderDecoderModel classes? | I have been looking to do some seq2seq tasks in the huggingface-transformers using BertGeneration or EncoderDecoderModel classes.
But I have only ended up finding some simple examples, like the one below, in the API documentation.
```
>>> import torch
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
>>> # forward
>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids)
>>> # training
>>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True)
>>> loss, logits = outputs.loss, outputs.logits
>>> # save and load from pretrained
>>> model.save_pretrained("bert2bert")
>>> model = EncoderDecoderModel.from_pretrained("bert2bert")
>>> # generation
>>> generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
```
Is there any Jupyter notebook or detailed example using BertGeneration or EncoderDecoderModel classes specifically? Even though I already know that these classes are released quite recently...
It would be a great help for me if I could find one. Thanks! | 10-29-2020 07:01:44 | 10-29-2020 07:01:44 | Releasing in ~1 week - it's almost ready :-) <|||||>Thanks for letting me know! :)<|||||>I've released two condensed notebooks as mentioned here: https://discuss.huggingface.co/t/leveraging-pre-trained-checkpoints-for-summarization/835/13?u=patrickvonplaten
Will also release a longer educational blog post in a bit.<|||||>https://huggingface.co/blog/warm-starting-encoder-decoder |
transformers | 8,141 | closed | Vocab files missing in community pre-trained t5 model | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.4.0-1028-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
Summarization: @sshleifer
## Information
I am trying to use the sshleifer/t5-base-cnn for summarization task, but there seems to be an issue with the tokenizer portion. I tried looking at the files part in https://huggingface.co/sshleifer/t5-base-cnn# and there doesn't seem to be a vocab file there.
>tokenizer = AutoTokenizer.from_pretrained("sshleifer/t5-base-cnn")
>model = AutoModelWithLMHead.from_pretrained("sshleifer/t5-base-cnn")
>OSError: Model name 'sshleifer/t5-base-cnn' was not found in tokenizers model name list (t5-small, t5-base, t5-large
, t5-3b, t5-11b). We assumed 'sshleifer/t5-base-cnn' was a path, a model identifier, or url to a directory containin
g vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
| 10-29-2020 06:16:28 | 10-29-2020 06:16:28 | I'm guessing you can use the tokenizer from t5-base (https://huggingface.co/t5-base#list-files) but @sshleifer can confirm or infirm<|||||>> I'm guessing you can use the tokenizer from t5-base (https://huggingface.co/t5-base#list-files) but @sshleifer can confirm or infirm
This is what I used in the interim. I'm just not sure if there are some implications with using a different tokenizer with the fine-tuned model.<|||||>Correct all t5 tokenizers are identical. There will be no issue. |
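A short sketch of the workaround confirmed above — pair the fine-tuned checkpoint with the stock `t5-base` tokenizer:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("t5-base")                   # tokenizer from the base model
model = AutoModelWithLMHead.from_pretrained("sshleifer/t5-base-cnn")   # fine-tuned weights
```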
transformers | 8,140 | closed | Customize tokenizer in model card's widget | I trained a Chinese Roberta model. In the model card, the widget uses a tokenizer defined in config.json(`RobertaTokenizer`). But my model uses `BertTokenizer`. Can I customize the tokenizer in the widget of the model card just like I can choose any combination of model and tokenizer in a pipeline? | 10-29-2020 03:01:07 | 10-29-2020 03:01:07 | I tried to use `BertModel` instead of `RobertaModel` (copy weights from Roberta to Bert). But the position embedding is different. And the outputs are different... So I have to use this combination of `RobertaModel` and `BertTokenizer`. Is that mean I can't use the inference widget?<|||||>Yes, this is possible. See https://github.com/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a, you should add a `tokenizer_class` attribute to your config.json with the tokenizer class you want to use.
cc @sgugger @LysandreJik I have no idea if this is currently documented or just in the code ๐คญ<|||||>> Yes, this is possible. See [ed71c21](https://github.com/huggingface/transformers/commit/ed71c21d6afcbfa2d8e5bb03acbb88ae0e0ea56a), you should add a `tokenizer_class` attribute to your config.json with the tokenizer class you want to use.
>
> cc @sgugger @LysandreJik I have no idea if this is currently documented or just in the code ๐คญ
Thank you! It works. I think you are right and I did not find this configuration in the documentation: https://huggingface.co/transformers/main_classes/configuration.html<|||||>Looks like that guy who made the PR did not document the new argument he added :-p <|||||>arg, who does that guy think he is? ๐ |
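For reference, one way the fix above can be applied is to set the attribute on the loaded config and re-save it — a sketch, where the model identifier and output folder are placeholders:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("your-username/your-chinese-roberta")  # placeholder model id
config.tokenizer_class = "BertTokenizer"
# writes a config.json containing "tokenizer_class": "BertTokenizer", ready to be re-uploaded
config.save_pretrained("path/to/local/folder")
```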
transformers | 8,139 | closed | Fix doc errors and typos across the board | 10-29-2020 02:09:56 | 10-29-2020 02:09:56 | ||
transformers | 8,138 | closed | How to get translation of one batch of sentences after batch_encode_plus? | ```
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-es-en")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
batch_input_str = (("Mary spends $20 on pizza"), ("She likes eating it"), ("The pizza was great"))
encoded = (tokenizer.batch_encode_plus(batch_input_str, pad_to_max_length=True))
```
The ```encoded``` is like:
```
{'input_ids': [[4963, 10154, 5021, 9, 25, 1326, 2255, 35, 17462, 0], [552, 3996, 2274, 9, 129, 75, 2223, 25, 1370, 0], [42, 17462, 12378, 9, 25, 5807, 1949, 0, 65000, 65000]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]}
```
Then, should I just pass the ```encoded``` to
```
output = model.generate(encoded)
```
And then use
```
res = tokenizer.decode(output)
```
Thanks! | 10-29-2020 01:43:52 | 10-29-2020 01:43:52 | Hello, have you read the docs concerning the translation task? It is [available here](https://huggingface.co/transformers/task_summary.html#translation).
Since you're specifically asking about a Helsinki model, you can find the documentation, with examples, [here](https://huggingface.co/transformers/model_doc/marian.html#multilingual-models). |
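As a quick illustration of the batched pattern from those docs — a sketch, reusing the model from the question (note that an `opus-mt-es-en` model expects Spanish source sentences; the example sentences below are illustrative):
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-es-en")

batch_input_str = ["Mary gasta 20 dólares en pizza", "Le gusta comerla", "La pizza estaba muy buena"]
encoded = tokenizer(batch_input_str, padding=True, return_tensors="pt")  # tensors rather than plain lists
generated = model.generate(**encoded)
translations = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(translations)
```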
transformers | 8,137 | closed | In built code not able to download for "bert-base-uncased" when running on cluster. | Traceback (most recent call last):
File "/users/sroychou/BERT_text_summarisation/scripts/train_bert_summarizer.py", line 12, in <module>
from metrics import optimizer, loss_function, label_smoothing, get_loss_and_accuracy, tf_write_summary, monitor_run
File "/users/sroychou/BERT_text_summarisation/scripts/metrics.py", line 16, in <module>
_, _, _ = b_score(["I'm Batman"], ["I'm Spiderman"], lang='en', model_type='bert-base-uncased')
File "/users/sroychou/.local/lib/python3.7/site-packages/bert_score/score.py", line 105, in score
tokenizer = AutoTokenizer.from_pretrained(model_type)
File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 298, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_auto.py", line 330, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/users/sroychou/.local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 382, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file | 10-29-2020 01:06:01 | 10-29-2020 01:06:01 | It seems that you have no internet access<|||||>Thank You. We also came to know that the cluster doesn't have internet access. I can manually download it and put that in a cache folder, if that is possible, can you please suggest where we can put this in a cache folder so that it could access from that place.<|||||>You could put it in any folder and point to that folder instead! The `from_pretrained` method takes either an identifier to point to the S3 bucket, or a local path containing the required files.
The files must be named correctly, however (`pytorch_model.bin` for the PT model, `tf_model.h5` for the TF model, and `config.json` for the configuration).
I guess the easiest for you would be to do something like the following:
1. Create the model cache
```shell-script
mkdir model_cache
cd model_cache
python
```
2. Download and save the models to the cache (here are two examples with BERT and RoBERTa)
```py
# When doing this you must be careful that the architectures you're using contain all the trained layers that
# you will need in your task. Using the architectures with which they were pre-trained makes sure to contain
# all of these layers
from transformers import BertForPreTraining, BertTokenizer, RobertaForMaskedLM, RobertaTokenizer
BertForPreTraining.from_pretrained("bert-base-cased").save_pretrained("bert-cache")
BertTokenizer.from_pretrained("bert-base-cased").save_pretrained("bert-cache")
RobertaForMaskedLM.from_pretrained("roberta-base").save_pretrained("roberta-cache")
RobertaTokenizer.from_pretrained("roberta-base").save_pretrained("roberta-cache")
```
You can check that the folder now contains all the appropriate files:
```shell-script
ls -LR
# Outputs the following
./bert-cache:
config.json pytorch_model.bin special_tokens_map.json tokenizer_config.json vocab.txt
./roberta-cache:
config.json merges.txt pytorch_model.bin special_tokens_map.json tokenizer_config.json vocab.json
```
You can then move your folder `model_cache` to your machine which has no internet access. Hope that helps.<|||||>Thanks a lot for the detailed explanation.
I followed your steps and saved the checkpoints in model_cache and uncased_l12 (with the same contents). However, it is showing a KeyError when it is referencing the model_cache folder
INFO:tensorflow:Extracting pretrained word embeddings weights from BERT
2020-10-30 14:37:43.909781: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
Some layers from the model checkpoint at /users/sroychou/uncased_l12/ were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls']
- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFBertModel were initialized from the model checkpoint at /users/sroychou/uncased_l12/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.
INFO:tensorflow:Embedding matrix shape '(30522, 768)'
INFO:tensorflow:Loading Pre-trained BERT model for BERT SCORE calculation
setting default value to last_recorded_value
Traceback (most recent call last):
File "/users/sroychou/BERT_text_summarisation/scripts/train_bert_summarizer.py", line 12, in <module>
from metrics import optimizer, loss_function, label_smoothing, get_loss_and_accuracy, tf_write_summary, monitor_run
File "/users/sroychou/BERT_text_summarisation/scripts/metrics.py", line 16, in <module>
_, _, _ = b_score(["I'm Batman"], ["I'm Spiderman"], lang='en', model_type='/users/sroychou/model_cache/')
File "/users/sroychou/.local/lib/python3.7/site-packages/bert_score/score.py", line 100, in score
num_layers = model2layers[model_type]
KeyError: '/users/sroychou/model_cache/'

Is there something I am doing wrong? I've been stuck on this for some time.
<|||||>Hmm well it seems that is an issue with `bert_score`? I don't know what is `BERT_text_summarisation`, I don't know what is the `metrics` script, and I do not know what is the `bert_score` package. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
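For anyone landing here for the original offline-cache question, loading on the machine without internet access is then just a matter of pointing `from_pretrained` at the copied folders — a sketch, with placeholder paths:
```python
from transformers import BertForPreTraining, BertTokenizer

local_dir = "/path/to/model_cache/bert-cache"  # wherever the folder was copied on the offline machine
model = BertForPreTraining.from_pretrained(local_dir)
tokenizer = BertTokenizer.from_pretrained(local_dir)
```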
|
transformers | 8,136 | closed | How to perform model.predict loop with TFRobertaForSequenceClassification? | I'd like to perform inference loop for the following roberta model:
```
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base',return_dict=True,num_labels=2)
```
on a large set of sentence pairs (a couple of hundred thousand). I wanted to use `model.predict` and specify a batch size, but there is no way to pass the inputs below (`encoded_data` is the tokenization of the input data) to `model.predict`
```
attention_mask=encoded_data['attention_mask'],
token_type_ids=encoded_data['token_type_ids']
```
So what is the alternative way to do that? | 10-28-2020 23:54:04 | 10-28-2020 23:54:04 | Hi, this [Kaggle notebook](https://www.kaggle.com/xhlulu/jigsaw-tpu-xlm-roberta) shows a very concise way to efficiently train/predict Huggingface's `XLMRoberta` (which is the same format as `Roberta`). Hope it helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
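For reference, a minimal sketch of one alternative for the original question — batching manually and calling the model directly — assuming `first_sentences`/`second_sentences` hold the paired inputs:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFRobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", return_dict=True, num_labels=2)

def predict_in_batches(first_sentences, second_sentences, batch_size=64):
    all_logits = []
    for i in range(0, len(first_sentences), batch_size):
        enc = tokenizer(
            first_sentences[i : i + batch_size],
            second_sentences[i : i + batch_size],
            padding=True, truncation=True, return_tensors="tf",
        )
        outputs = model(enc)  # the encoded batch is passed as the first argument
        all_logits.append(outputs.logits)
    return tf.concat(all_logits, axis=0)
```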
|
transformers | 8,135 | closed | Bort (Amazon's reduced BERT) | # 🌟 New model addition
## Model description
Amazon Alexa researchers extract an optimal subset of architectural parameters for the BERT architecture by applying recent breakthroughs in algorithms for neural architecture search. The proposed optimal subset, "Bort," is just 5.5 percent the effective size of the original BERT-large architecture (not counting the embedding layer), and 16 percent of its net size.
## Open source status
using mxnet and gluonnlp
paper https://arxiv.org/pdf/2010.10499.pdf
repo https://github.com/alexa/bort
* [X] the model implementation is available: (give details)
* [X] the model weights are available: (give details)
* [@adewynter] who are the authors: (mention them, if possible by @gh-username)
| 10-28-2020 21:21:12 | 10-28-2020 21:21:12 | Any update on this one?<|||||>This was added in #9112 |
transformers | 8,134 | closed | Error with multi-gpu training | I'm trying to build a QuestionAnswering model using transformers
It works with single gpu training but fails with multiple gpus.
Is there any bug in the below code?
```
class QAModel(pl.LightningModule):
def __init__(self):
super(QAModel, self).__init__()
self.model_type = parameters["BaseModel_type"]
self.config = AutoConfig.from_pretrained(model_name)
self.base_model = AutoModelForQuestionAnswering.from_pretrained(model_name, config = self.config)
self.tokenizer = tokenizer
def forward(self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
start_positions=None,
end_positions=None):
outputs = self.base_model(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
start_positions=start_positions,
end_positions=end_positions,
)
return outputs
def prepare_data(self):
self.train_dataset, _, _ = load_data(parameters["TRAIN_FILE"], is_training=True)
self.val_dataset, self.val_examples, self.val_features = load_data(parameters["DEV_FILE"], is_training=False)
self.test_dataset, self.test_examples, self.test_features = load_data(parameters["TEST_FILE"], is_training=False)
def train_dataloader(self):
return DataLoader(dataset=self.train_dataset, batch_size=parameters["batch_size"], shuffle=True, num_workers=parameters["num_threads"])
def val_dataloader(self):
return DataLoader(dataset=self.val_dataset, batch_size=parameters["batch_size"], num_workers=parameters["num_threads"])
def test_dataloader(self):
return DataLoader(dataset=self.test_dataset, batch_size=parameters["batch_size"], num_workers=parameters["num_threads"])
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=parameters["learning_rate"])
def training_step(self, batch, batch_idx):
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
"start_positions": batch[3],
"end_positions": batch[4],
}
outputs = self.forward(**inputs)
loss = outputs[0]
return {"loss": loss}
def validation_step(self, batch, batch_idx):
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
}
feature_indices = batch[3]
outputs = self.forward(**inputs)
model = QAModel()
trainer = pl.Trainer(gpus=-1, distributed_backend='dp', max_epochs=parameters["epochs"])
trainer.fit(model)
```
I get this error on running it with multiple gpus:
```
RuntimeError: grad can be implicitly created only for scalar outputs
``` | 10-28-2020 20:45:29 | 10-28-2020 20:45:29 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@nrjvarshney Hello, did you manage to resolve this error? I am having the same error too. Is there anybody to help?<|||||>Hello, did you manage to resolve this error? I am having the same error too. Is there anybody to help? |
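One commonly suggested remedy for this exact message with the `dp` backend — not verified against this particular setup — is that each GPU returns its own loss, so the gathered losses must be reduced to a scalar in `training_step_end`. A sketch building on the module above:
```python
import pytorch_lightning as pl

class QAModelDP(QAModel):  # reuses the QAModel defined above
    def training_step_end(self, outputs):
        # with distributed_backend="dp", "loss" arrives as a per-GPU vector; average it to a scalar
        return {"loss": outputs["loss"].mean()}
```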
transformers | 8,133 | closed | [examples] minimal version requirement run-time check in PL | This PR adds a run-time version check for PL. Via the warning for now. This is a follow up to https://github.com/huggingface/transformers/pull/7852#issuecomment-718144095
In the nature of development we don't constantly re-run `pip install -r requirements.txt`, so often, when a breaking change is introduced, we have to signal to each other - hey, upgrade your PL. It'd be much simpler to let the program do this automatically for us.
for now one needs to update requirements.txt and the relevant .py files, but we could automate this to have one source to maintain - parse `requirements.txt` and pull the important min-version from there...
for now this is just a hardcoded plain check.
**My only suggestion is to make it an error** - there are too many warnings in the test suite for someone to notice yet another one - so I vote for making it an error.
@sshleifer, @sgugger, @LysandreJik | 10-28-2020 20:15:26 | 10-28-2020 20:15:26 | |
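For illustration, a run-time check of this kind can look roughly like the sketch below; the minimum version used here is a placeholder, not the value actually pinned in the PR:
```python
from packaging import version
import pytorch_lightning as pl

MIN_PL_VERSION = "1.0.4"  # placeholder, not the PR's actual pin

if version.parse(pl.__version__) < version.parse(MIN_PL_VERSION):
    raise RuntimeError(
        f"pytorch_lightning>={MIN_PL_VERSION} is required, found {pl.__version__}. "
        "Please re-run `pip install -r examples/requirements.txt`."
    )
```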
transformers | 8,132 | closed | New template for example and MLM example. | # What does this PR do?
This PR adds a cookiecutter template to add a new example and experiments with it to add the `run_mlm` new script. It runs with the same results as the old version. I'll also add a `run_plm` specific to XLNet then update the README and remove the old script.
Side note: the part for random masking applied in a data collator can become platform agnostic later on, if datasets adds a lazy map method.
| 10-28-2020 19:55:51 | 10-28-2020 19:55:51 | |
transformers | 8,131 | closed | [s2s test] cleanup | This PR introduces no functional change, just doing a clean up left behind from the initial split and copy of the distillation tests...
@sshleifer | 10-28-2020 19:33:13 | 10-28-2020 19:33:13 | |
transformers | 8,130 | closed | Name or path should be added on configuration as well | Close https://github.com/huggingface/transformers/issues/8035
Currently a configuration initialized with
```
config = BertConfig.from_pretrained(model_name)
```
does not have the `_model_name_or_path` attribute, whereas a configuration initialized from a model with
```py
model = BertModel.from_pretrained(model_name)
```
does.
This fixes the discrepancy and fixes the failing test in the process. | 10-28-2020 19:22:08 | 10-28-2020 19:22:08 | |
transformers | 8,129 | closed | Fix typo in `AutoModelForMaskedLM` docs | 10-28-2020 19:21:59 | 10-28-2020 19:21:59 | ||
transformers | 8,128 | closed | test style | 10-28-2020 18:47:10 | 10-28-2020 18:47:10 | ||
transformers | 8,127 | closed | Use pipeline on fine tuned model | # ❓ Questions & Help
## Details
I have fine tuned 'roberta-large' model according to my dataset. It is a sequence classification task
```
MODEL_NAME = 'roberta-large'
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
# Prediction function
def predict(sent):
sequence = tokenizer.encode_plus(sent, return_tensors="pt")['input_ids'].to(device)
logits = model(sequence)[0]
```
The above works fine but now I would like to use this model in pipeline like we have one for question-answering
```
nlp = pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased')
```
Any example would help.
Thank You.
| 10-28-2020 18:35:40 | 10-28-2020 18:35:40 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
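For readers with the same question, the `pipeline()` factory also accepts model and tokenizer objects (or a path to a saved checkpoint) directly — a sketch, where the directory is a placeholder for wherever the fine-tuned model was saved:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_dir = "./my-finetuned-roberta-large"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

# "sentiment-analysis" maps to the text-classification pipeline and works for any sequence classification head
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("An example sentence to classify."))
```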
|
transformers | 8,126 | closed | Update CI cache | Update the CI cache as torch 1.7 has been released | 10-28-2020 17:59:19 | 10-28-2020 17:59:19 | |
transformers | 8,125 | closed | Cannot load saved tokenizer using AutoTokenizer | ## Environment info
- `transformers` version: 3.4.0
- Platform: Win10 x64 (1607 Build 14393.3866)
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@mfuntowicz
## Information
It appears that you can save a tokenizer to disk in a model agnostic way, but you cannot load it back in a model agnostic way. Is this a bug or by design?
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')
# ERROR
```
The error you get is because the config argument is None, which means AutoTokenizer calls AutoConfig.from_pretrained, which utilises file_utils.CONFIG_NAME, however tokenizer.save_pretrained uses tokenization_utils_base.TOKENIZER_CONFIG_FILE instead, so they're not compatible with one another.
## Expected behavior
I would assume that calling AutoTokenizer.from_pretrained would be able to load and instantiate the correct model tokenizer without the user having to directly import the model tokenizer class first (e.g. RobertaTokenizer.from_pretrained). This would help a lot in moving to a model agnostic way of handling tokenizers, which I feel is the goal of the AutoTokenizer class. The fact that it can't load a tokenizer from disk seems to be a bug, unless there is a different way of doing this?
| 10-28-2020 17:58:55 | 10-28-2020 17:58:55 | Hello! Indeed, I wouldn't say this is a bug but more of a limitation of the `AutoTokenizer` class that has to rely on the model configuration in order to guess which tokenizer is affiliated with the model. Since you're not interacting with the configuration in the configuration anywhere here, and, therefore, are not saving the model configuration in `TEST/tokenizer`, the AutoTokenizer cannot guess from which tokenizer to load.
One way to go around this limitation is to either specify the configuration when loading the tokenizer for the second time:
```py
from transformers import AutoTokenizer, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer', config=AutoConfig.from_pretrained("roberta-base"))
```
Another way would be to save the configuration in the initial folder:
```py
from transformers import AutoTokenizer, AutoConfig
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
config = AutoConfig.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
config.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')
```
In any case, the documentation about this should be improved.<|||||>Thank you for that reply, I very much appreciate it!
What about the following, would this work also?
```
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
# make changes to tokenizer, for example add custom tokens
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
tokenizer = tokenizer.from_pretrained('TEST/tokenizer')
```
If you do it this way, when you call the last line of the code, will you restore any changes you previously made to the tokenizer?
Finally, for what's it worth, I do believe that the way the library is doing it now is wrong, from a design philosophy perspective. The Tokenizers should be able to stand completely apart from their models, as they are their own classes, with their own configs and config format. You shouldn't need the Model Config in order to save down and restore a tokenizer, because you can do it entirely without the model if you call the direct model tokenizer class:
```
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = RobertaTokenizer.from_pretrained('TEST/tokenizer')
# WORKS
```
So it really should not make any difference if you execute the same design pattern, but from a model-agnostic way, as in my original example:
```
tokenizer = AutoTokenizer.from_pretrained('roberta-base')
tokenizer.save_pretrained('TEST/tokenizer')
tokenizer = AutoTokenizer.from_pretrained('TEST/tokenizer')
# ERROR, but really should work
```
The AutoTokenizer class should just be about Tokenizer, and should not be calling AutoConfig (which is for models). Basically, you need an AutoTokenConfig class instead, which decouples the two. Calling `save_pretrained` on a Tokenizer (any tokenizer) should save all the information about it (including it's model-class, for example RobertaTokenizer) such that you can then load it from disk using AutoTokenizer, and the AutoTokenizer would be smart enough to check the files on disk, read some JSON info, and say "Ah yes, this should be a RobertaTokenizer" and then return to you a RobertaTokenizer object, even though you called AutoTokenizer.from_pretrained. In fact, as it stands now, this information about tokenizer type is already being written to disk, it's just not being read back by the AutoTokenizer. If you created an AutoTokenizerConfig class with its own tokenizer-specific config reading-from-disk methods, then you could easily accomplish this.
The reason this would be a powerful design pattern to have is you could make complex language modelling pipelines across different scripts and the tokenizer would only need to be class specified once, at the topmost script.
For example, say you have a script which preprocesses a custom corpus for downstream language modelling, and it does this using Shelve, creating compressed records ready to be read by a downstream collator class. But in the same directory, it also saves down the (custom) tokenizer used, let's say a modified RobertaTokenizer.
The downstream script would not need to know anything about RobertaTokenizer, all it does is read in the Shelve records, and loads the tokenizer using AutoTokenizer.from_pretrained, and then just runs what it needs to run, and hands its results to yet another downstream process, and then that process also just loads the tokenizer using AutoTokenizer.from_pretrained, and doesn't need to know anything about what type of tokenizer it is, because it just uses the PretrainedTokenizer base class methods.
So the only script that ever knew about RobertaTokenizer was the very first one, and it saved it using save_pretrained, and then all of the downstream worker scripts just load that tokenizer using AutoTokenizer.from_pretrained. This allows all the downstream scripts to be model-agnostic, and not need to know about RobertaTokenizer at all, meaning they could work with any PretrainedTokenizer at all.
This is a very efficient pipeline that makes full use of the abstract base classes like PretrainedTokenizer. Otherwise you need each of your downstream scripts to be model-specific, because they need to be told to use RobertaTokenizer instead of BertTokenizer instead of GPT2Tokenizer, etc.
The only thing that's missing to make this all work is for AutoTokenizer.from_pretrained to work in the manner which I have original tried to make it work.<|||||>While we aim for tokenizers and models to be pairs and not standalone classes, I do agree it would be better from a user perspective to put the tokenizer's class directly in the `tokenizer_config.json`, so as to work in the use-case that you mention here. We could add a flag to the configuration, similar to the `architecture` that we have in the model configuration.
Thoughts @julien-c, @thomwolf ?<|||||>Yes, sounds good to me indeed<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,124 | closed | [s2s] distributed eval gets stuck on error w/ multigpu | `examples/seq2seq/distillation.py` and probably others remain hanging on internal error when run w/ multiple gpus (2 here):
```
rm -r /tmp/tmpqajqhzwo; PYTHONPATH="src" python examples/seq2seq/distillation.py --supervise_forward --normalize_hidden --label_smoothing=0.0 --eval_beams=1 --val_metric=loss --save_top_k=1 --adafactor --early_stopping_patience=-1 --logger_name=default --length_penalty=0.5 --cache_dir= --task=summarization --num_workers=2 --alpha_hid=0 --freeze_embeds --sortish_sampler --student_decoder_layers=1 --val_check_interval=0.5 --output_dir=/tmp/tmpqajqhzwo --no_teacher --fp16_opt_level=O1 --gpus=2 --max_grad_norm=1.0 --do_train --do_predict --accumulate_grad_batches=1 --seed=42 --model_name_or_path=sshleifer/tinier_bart --config_name= --tokenizer_name=facebook/bart-large --learning_rate=0.3 --lr_scheduler=linear --weight_decay=0.0 --adam_epsilon=1e-08 --warmup_steps=0 --max_epochs=2 --train_batch_size=1 --eval_batch_size=2 --max_source_length=12 --max_target_length=12 --val_max_target_length=12 --test_max_target_length=12 --n_train=-1 --n_val=-1 --n_test=-1 --student_encoder_layers=1 --freeze_encoder --data_dir=examples/seq2seq/test_data/wmt_en_ro --alpha_mlm=0.2 --alpha_ce=0.8 --teacher=sshleifer/bart-tiny-random
```
last output:
```
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/2
Traceback (most recent call last):
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/distillation.py", line 281, in <module>
distill_main(args)
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/distillation.py", line 269, in distill_main
check_output_dir(args, expected_items=3)
File "/mnt/nvme1/code/huggingface/transformers-master/examples/seq2seq/utils.py", line 641, in check_output_dir
raise ValueError(
ValueError: Output directory (/tmp/tmpqajqhzwo) already exists and has 7 items in it (expected 3 items). Use --overwrite_output_dir to overcome.
```
and now it hangs, holding onto the gpu. Can't even Ctrl-C the process - needed to suspend+kill manually.
I know that adding `--overwrite_output_dir` will remove the error, but this is not the issue. It shouldn't hang on error (e.g. the test suite needs to continue running in such event).
@sshleifer | 10-28-2020 16:37:57 | 10-28-2020 16:37:57 | same happens with `finetune.py` - happened in another run when it hit OOM. So basically any error.<|||||>@williamFalcon @SeanNaren (lightning friends)
Do you guys have a clever way to collect failures in your multigpu tests?
When something breaks, our multigpu test hangs.
<|||||>yes... good questions haha.
So, some things we know:
1. multi gpu tests should run one per test (ie: don't parametrize via pytest). Seems that the way pytest starts an experiment does not play well with pytorch distributed.
2. ddp in lightning needs to use subprocess inside a test and call an external file.
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/tests/backends/test_ddp.py#L57
3. ddp spawn tests need to adhere to that single test per function call I mentioned in 1. pytest parametrized ddp tests WILL freeze the build. <|||||>Thank you for the insights, @williamFalcon
That is the case already - I discovered the subprocess idea by looking at your distributed ddp test ;) And none of these are parametrized.
So it must be something else.
p.s. btw, have you tried the `parameterized` module https://pypi.org/project/parameterized/? It's more flexible than `pytest`'s `parametrize` - perhaps it won't have the same impact (but that's unrelated to this issue).<|||||>oh, and to clarify, this has nothing to do with testing. The hanging happens in the standalone scripts. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
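As an aside, the "call an external file via subprocess" pattern mentioned above looks roughly like the following sketch (script name, arguments and timeout are illustrative):
```python
import subprocess
import sys

def test_multigpu_finetune(tmp_path):
    # run the training script in a separate process so a failure cannot freeze the test runner
    cmd = [sys.executable, "examples/seq2seq/finetune.py", "--gpus", "2", "--output_dir", str(tmp_path)]
    result = subprocess.run(cmd, capture_output=True, timeout=600)  # timeout guards against hangs like the one above
    assert result.returncode == 0, result.stderr.decode()
```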
|
transformers | 8,123 | closed | [DOC] Improve pipeline() docstrings for config and tokenizer | As currently written, it was not clear to me which arguments were needed when using a non-default model in `pipeline()`. It seemed that when you provided a non-default `model`, that you still needed to manually change the `config` and `tokenizer` because otherwise the "task's default will be used". In practice, though, the pipeline is smart enough to automatically choose the right config/tokenizer for the given model. This PR clarifies that a bit in the docstrings/documentation, by explaining exactly which priorities are used when loading the tokenizer. A small change was made for `config`, too.
Admittedly, the wording for the tokenizer part is a bit off (programmatical, even), but I think it should make clear how the right tokenizer is loaded.
cc @sgugger | 10-28-2020 16:31:31 | 10-28-2020 16:31:31 | @sgugger I made the change as you requested. Not sure why CI is failing on build_doc. Seems to have to do with some env installation.<|||||>The failure is spurious (basically the new version of pytorch is not cached on the CI and it fails to download it sometimes). Thanks for the fix! |
transformers | 8,122 | closed | behaviour of ZeroShotClassification using facebook/bart-large-mnli is different on online demo vs local machine | ## Environment info
- `transformers` version: 3.4.0
- Platform: Ubuntu 20.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (GPU:Yes)
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
## Information
Model I am using (Bert, XLNet ...): facebook/bart-large-mnli
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
First I tried the hosted demo online at huggingface, which gives me a very high score of **0.99 for travelling (as expected)**:

Then I tried to run the code on my local machine, which returns **very different scores for all labels** (poor scores):
```
from transformers import pipeline
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModel.from_pretrained("facebook/bart-large-mnli")
zsc = pipeline(task='zero-shot-classification', tokenizer=tokenizer, model=model)
sequences = 'one day I will see the world'
candidate_labels = ['travelling', 'cooking', 'dancing']
results = zsc(sequences=sequences, candidate_labels=candidate_labels, multi_class=False)
print(results)
>>>{'sequence': 'one day I will see the world',
'labels': ['travelling', 'dancing', 'cooking'],
'scores': [0.5285395979881287, 0.2499372661113739, 0.22152313590049744]}
```
I **got this warning message when initializing the model**:
`model = AutoModel.from_pretrained("facebook/bart-large-mnli")`
```
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartModel: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
## Expected behavior
The **_scores_** from the code on my local machine should be quite similar to the online demo's.
| 10-28-2020 16:25:39 | 10-28-2020 16:25:39 | Replace `AutoModel` with `AutoModelForSequenceClassification`. The former won't add the sequence classification head, i.e. it will use `BartModel` instead of `BartForSequenceClassification`, so the pipeline is trying to use just the outputs of the encoder instead of the NLI predictions in your snippet.<|||||>@joeddav that fixed it thanks !<|||||>Have the same problem:
conda environment: Python 3.7.9
```
pip3 install torch==1.6
pip3 install transformers
```
Running
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
```
Results in message:
> Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
> - This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
> - This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
@turmeric-blend: How is my setup different from yours?
<|||||>actually the error message was still there after the fix, but the scores running on local machine were consistent with the online demo @gustavengstrom
any ideas why is there still the warning message @joeddav ?<|||||>Yeah that warning isn't a concern. It's just letting you know that some of the parameters checkpointed in the pretrained model were not able to be matched with the model class, but in this case it's just a couple of meta-fields (encoder/decoder version), so your weights should be matched up fine. |
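For completeness, the corrected version of the snippet from the issue — only the model class changes:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")  # keeps the NLI classification head
zsc = pipeline(task="zero-shot-classification", tokenizer=tokenizer, model=model)

results = zsc(
    sequences="one day I will see the world",
    candidate_labels=["travelling", "cooking", "dancing"],
    multi_class=False,
)
print(results)
```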
transformers | 8,121 | closed | fix(trainer_callback]: typo | # What does this PR do?
Fix a typo in `trainer_callback`
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @sgugger
| 10-28-2020 16:08:48 | 10-28-2020 16:08:48 | Thanks a ton! |
transformers | 8,120 | closed | Rename add_start_docstrings_to_callable | # What does this PR do?
This PR renames all `add_start_docstrings_to_callable` uses to a more explicit `add_start_docstrings_to_model_forward`. This should avoid confusion on the use of this decorator.
(It's an internal function so there should be no breaking change.) | 10-28-2020 14:59:10 | 10-28-2020 14:59:10 | |
transformers | 8,119 | closed | feat(wandb): save model as artifact | # What does this PR do?
**EDIT**
The logic has been simplified.
Model is just saved to a temporary folder and uploaded as artifact at the end of training.
**ORIGINAL message**
Save trained model as artifact.
A few different possibilities:
* log model at `on_save` callback -> the issue is there could quickly be too many checkpoints to upload, high bandwidth...
* log model at `on_train_end`
* when we have access to `state.best_model_checkpoint`, we should just upload that folder
* we could upload entire `output_dir` but it could be very large (same problem as `on_save`, ideally we only upload one model
* we can save last model in a separate folder and upload it -> issue is that we don't have access to `Trainer.save_model` (where are the 2-way callbacks)
* we could just use `_sorted_checkpoints` and log only last element (which would also be the best model when metrics are given)
I'm thinking I should actually go with the last option (use of `_sorted_checkpoints`). What do you think?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @sgugger
| 10-28-2020 14:44:43 | 10-28-2020 14:44:43 | Ok I submitted #8121 for the typo in `_new_step`.
As for this PR, I'm thinking the logic should be:
* use folder referenced by the last item returned from `_sorted_checkpoints`
* in case it's empty, we should probably save the current checkpoint locally and upload it (since we specifically requested an upload to wandb)<|||||>After experimenting a bit more:
1. Should we upload a model only if `_sorted_checkpoints(...)` is non-empty?
* we don't necessarily get the last model (eg save every 100 steps with 520 steps total)
2. Should we just save current state model at end of training in `args.output_dir + "\wandb"`
* we need to have access to `Trainer.save_model` from `WandbCallback`
* we could decide to use `state.best_model_checkpoint` when present instead
* we ignore any checkpoint
<|||||>@sgugger Do you want me to make an attempt at giving access to the `Trainer` from callbacks or is it a pattern you want to avoid?<|||||>Hi @borisdayma, sorry I took a bit of time to reply on this, we were waiting for the new version of the model hub to materialize before moving forward on this.
So! The callbacks aren't 2-way in Transformers because then you have to be very careful about the order of their execution. Here the design was to just allow for callbacks that can read the state, not write, and for any piece of code that needs the write access, users should subclass the `Trainer`. The circular reference is also problematic for memory management so we leave 2-way callbacks for libraries focused on training models, and keep our simple reporting callbacks as they are :-)
Like you said, you have access to the state with `best_model_checkpoint`. You can also unpack the model from the kwargs and access it. What is in the `Trainer.save_model` method that you need? Worst case scenario, you can even instantiate an empty Trainer with just the model and the training arguments, and use its `save_model` method.<|||||>The issue with `best_model_checkpoint` is that it does not exist if there's no measurement metric set.
It could make sense to define it as the last checkpoint in that case.
The next issues would then be:
* sometimes no model has been saved yet (maybe not enough epochs) while we want to log the model -> we could accept that it's an issue on the user side and give a warning
* sometimes we may log every 100 steps and run for 180 steps. The last checkpoint is a bit old -> on this aspect I feel like the `Trainer` should automatically save the final step as a checkpoint
What do you think?
The alternative would be to completely ignore that logic, let wandb save a model somewhere and upload it. I had not realized we could have access to `model` from the callback (though saving from `Trainer` is better as it handles TPU, save tokenizer, args and may also change in the future).<|||||>I think the most logical is to save the final model, the intermediate checkpoints are there to resume training if something went wrong, or load the best model at the end (which is done before the `on_train_end` event). That's also why we don't always save the model at the end of training, leaving that part to the user in a script.
If you use the logic of unpacking the model from the kwargs, you can simply create a new `Trainer` with it which can then save it easily with the `Trainer.save_model` method. Normally the model you unpack is the reference to the real model, so you won't have a `DistributedDataParallel` or something like that, and everything should work smoothly.
<|||||>Finally getting closer!
Few notes:
* I import Trainer inside my function to avoid circular reference
* I need to find a way to see if I need `Trainer` or `TfTrainer`, should I infer it through `type(model)`
* I use `Trainer.state` as model metadata but maybe it's not that useful. The artifact is associated to a run which already has all the config parameters but it could be useful to relog it, or maybe Iย should just log the final metrics instead (that Iย can get through wandb)<|||||>I think it's becoming pretty cool. Here is [an artifact](https://wandb.ai/borisd13/huggingface/artifacts/model/run-test-clm/2fc1855753d610244974) logged with this method.
The current limitation is that it only works with Pytorch for now.
Are there any plan for more synergy between `Trainer` or `TfTrainer` or should they be considered independently?<|||||>`TFTrainer` will be reworked in the near future and be a simple wrap around the Keras fit method (and the callbacks will be regular Keras callbacks).<|||||>I adjusted the metadata when we use `load_best_model_at_end`.
In that case we don't want to log the last metrics but only flos and best metric.<|||||>Small change:
* force commit of last step
* more robust way to get metadata, it will consider any data that has been logged and is a number
I'm now ready on my side. Feel free to ping me!<|||||>@LysandreJik let me know if you have any comments<|||||>Happy new year everybody!
Since it has already been a while since this PR was made, am I supposed to merge master and verify the tests are passing?
Let me know if I need to do anything on my side. |
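For readers unfamiliar with the W&B side, the upload itself boils down to something like the sketch below (names and metadata are illustrative, not the PR's actual code):
```python
import wandb

temp_dir = "output/final_model"  # placeholder: folder produced by Trainer.save_model() / save_pretrained()

with wandb.init(project="huggingface") as run:
    artifact = wandb.Artifact("run-model", type="model", metadata={"epoch": 2})  # illustrative metadata
    artifact.add_dir(temp_dir)
    run.log_artifact(artifact)
```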
transformers | 8,118 | closed | Document the various LM Auto models | # What does this PR do?
This PR adds the documentation for the three classes of models for LM (`AutoModelForCausalLM`, `AutoModelForMaskedLM` and `AutoModelForSeq2SeqLM`) and their TF equivalent. It also removes the documentation of `AutoModelWithLMHead` which is deprecated. | 10-28-2020 14:41:36 | 10-28-2020 14:41:36 |
transformers | 8,117 | closed | fast tokenizer issue on most user uploaded models | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-5.8.0-25-generic-x86_64-with-glibc2.32
- Python version: 3.8.6
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@mfuntowicz @julien-c
## Information
Found the bug on `camembert/camembert-base-ccnet` but probably common to many models uploaded by users.
On camembert base model, it works out of the box (there is no bug).
## To reproduce
Since `tokenizer` 0.9, it's possible to load the many unigram based tokenizers with the fast Rust implementation.
It appears that the file `tokenizer_config.json` of some of them is not up to date, in particular the information `"model_max_length": 512` is missing.
Because of that, the value of `model_max_length` is a very big integer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("camembert/camembert-base-ccnet", use_fast=True)
tokenizer.model_max_length
# Out[4]: 1000000000000000019884624838656
```
To fix it, the field model_max_length has to be added to the config file.
## Expected behavior
I would expect `tokenizer.model_max_length` to be equal to 512.
| 10-28-2020 11:53:23 | 10-28-2020 11:53:23 | Yes, we need to remove all the hardcoded configuration values of tokenizers in the transformers source code, and upload `tokenizer_config.json` files for all those models.
Also cc @n1t0 <|||||>Very strange, making some tests, the Rust implem is much slower than the Python one...
Measure done on my Mac (i7)
```python
import time
from transformers import AutoTokenizer
text = """
Il se dรฉduit des arrรชts de la Cour de justice de lโUnion europรฉenne du 27 avril 2017 (A-Rosa Flussschiff GmbH, nยฐ C-620/15) et du 6 fรฉvrier 2018 (รmer Altun, nยฐ C-359/16) que le juge, lorsquโil est saisi de poursuites pรฉnales du chef de travail dissimulรฉ, pour dรฉfaut de dรฉclarations aux organismes de protection sociale, et que la personne poursuivie produit des certificats E101, devenus A1, ร lโรฉgard des travailleurs concernรฉs, dรฉlivrรฉs au titre de lโarticle 14, paragraphe 2, sous a, du rรจglement nยฐ 1408/71, ne peut, ร lโissue du dรฉbat contradictoire, รฉcarter lesdits certificats que si, sur la base de lโexamen des รฉlรฉments concrets recueillis au cours de lโenquรชte judiciaire ayant permis de constater que ces certificats avaient รฉtรฉ obtenus ou invoquรฉs frauduleusement et que lโinstitution รฉmettrice saisie sโรฉtait abstenue de prendre en compte, dans un dรฉlai raisonnable, il caractรฉrise une fraude constituรฉe, dans son รฉlรฉment objectif, par lโabsence de respect des conditions prรฉvues ร la disposition prรฉcitรฉe et, dans son รฉlรฉment subjectif, par lโintention de la personne poursuivie de contourner ou dโรฉluder les conditions de dรฉlivrance dudit certificat pour obtenir lโavantage qui y est attachรฉ.
Doit ainsi รชtre cassรฉ lโarrรชt de la cour dโappel qui รฉcarte les certificats E101 sans avoir, au prรฉalable, recherchรฉ si lโinstitution รฉmettrice desdits certificats avait รฉtรฉ saisie dโune demande de rรฉexamen et de retrait de ceux-ci sur la base des รฉlรฉments concrets recueillis dans le cadre de lโenquรชte judiciaire permettant, le cas รฉchรฉant, de constater que ces certificats avaient รฉtรฉ obtenus ou invoquรฉs de maniรจre frauduleuse et que lโinstitution รฉmettrice sโรฉtait abstenue, dans un dรฉlai raisonnable, de les prendre en considรฉration aux fins de rรฉexamen du bien-fondรฉ de la dรฉlivrance desdits certificats, et dans lโaffirmative, sans รฉtablir, sur la base de lโexamen des รฉlรฉments concrets et dans le respect des garanties inhรฉrentes au droit ร un procรจs รฉquitable, lโexistence dโune fraude de la part de la sociรฉtรฉ poursuivie, constituรฉe, dans son รฉlรฉment matรฉriel, par le dรฉfaut, dans les faits de la cause, des conditions prรฉvues ร lโarticle 14, paragraphe 2, sous a, prรฉcitรฉ aux fins dโobtention ou dโinvocation des certificats E101 en cause et, dans son รฉlรฉment moral, par lโintention de ladite sociรฉtรฉ de contourner ou dโรฉluder les conditions de dรฉlivrance dudit certificat pour obtenir lโavantage qui y est attachรฉ (arrรชt nยฐ 1, pourvoi 13-88.631, arrรชt nยฐ 2, pourvoi 13-88.632 et arrรชt nยฐ 3, pourvoi nยฐ 15-80.735).
En revanche, prononce par des motifs conformes ร la doctrine de la Cour de lโUnion europรฉenne prรฉcitรฉe, la cour dโappel qui, pour relaxer les prรฉvenues, sociรฉtรฉs dโaviation civile, รฉnonce que lโenquรชte nโ a pas permis de constater les รฉlรฉments de fraude et sโabstient, en consรฉquence, dโopรฉrer une vรฉrification relative aux certificats E101 produits par elles (arrรชt nยฐ 4, pourvoi nยฐ 1581316).
"""
fast = False
repeat = 1000
# use_fast
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert/camembert-base-ccnet", use_fast=fast)
_ = tokenizer(text)
start = time.time()
for _ in range(repeat):
_ = tokenizer(text)
print("ccnet new", time.time() - start)
# CCNET Camembert saved few months ago
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="output/model", use_fast=fast)
_ = tokenizer(text)
start = time.time()
for _ in range(repeat):
_ = tokenizer(text)
print("ccnet old", time.time() - start)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert-base", use_fast=fast)
_ = tokenizer(text)
start = time.time()
for _ in range(repeat):
_ = tokenizer(text)
print("camembert base", time.time() - start)
```
fast = False
```
wandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
ccnet new 2.104267120361328
ccnet old 2.3693552017211914
Token indices sequence length is longer than the specified maximum sequence length for this model (684 > 512). Running this sequence through the model will result in indexing errors
camembert base 2.245959997177124
Process finished with exit code 0
```
fast = True
```
wandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
ccnet new 2.7245991230010986
ccnet old 2.7714219093322754
camembert base 2.9007809162139893
```
It appears that fast tokenizer... is much slower than Python implementation on a Mac (measures not done a Linux machine).<|||||>Thank you for reporting this @pommedeterresautee.
I think this is expected though: You are comparing a tokenizer that is based on SentencePiece (c++) with one in Rust. Our rust implementation is a bit slower than the SentencePiece when encoding a single sentence, but as soon as you are starting to encode batches with padding and post-processing, it gets faster!<|||||>Thank you, I didn't understand that batching was a thing during tokenization too!
```python
import time
from transformers import AutoTokenizer
text = """
Il se dรฉduit des arrรชts de la Cour de justice de lโUnion europรฉenne du 27 avril 2017 (A-Rosa Flussschiff GmbH, nยฐ C-620/15) et du 6 fรฉvrier 2018 (รmer Altun, nยฐ C-359/16) que le juge, lorsquโil est saisi de poursuites pรฉnales du chef de travail dissimulรฉ, pour dรฉfaut de dรฉclarations aux organismes de protection sociale, et que la personne poursuivie produit des certificats E101, devenus A1, ร lโรฉgard des travailleurs concernรฉs, dรฉlivrรฉs au titre de lโarticle 14, paragraphe 2, sous a, du rรจglement nยฐ 1408/71, ne peut, ร lโissue du dรฉbat contradictoire, รฉcarter lesdits certificats que si, sur la base de lโexamen des รฉlรฉments concrets recueillis au cours de lโenquรชte judiciaire ayant permis de constater que ces certificats avaient รฉtรฉ obtenus ou invoquรฉs frauduleusement et que lโinstitution รฉmettrice saisie sโรฉtait abstenue de prendre en compte, dans un dรฉlai raisonnable, il caractรฉrise une fraude constituรฉe, dans son รฉlรฉment objectif, par lโabsence de respect des conditions prรฉvues ร la disposition prรฉcitรฉe et, dans son รฉlรฉment subjectif, par lโintention de la personne poursuivie de contourner ou dโรฉluder les conditions de dรฉlivrance dudit certificat pour obtenir lโavantage qui y est attachรฉ.
Doit ainsi รชtre cassรฉ lโarrรชt de la cour dโappel qui รฉcarte les certificats E101 sans avoir, au prรฉalable, recherchรฉ si lโinstitution รฉmettrice desdits certificats avait รฉtรฉ saisie dโune demande de rรฉexamen et de retrait de ceux-ci sur la base des รฉlรฉments concrets recueillis dans le cadre de lโenquรชte judiciaire permettant, le cas รฉchรฉant, de constater que ces certificats avaient รฉtรฉ obtenus ou invoquรฉs de maniรจre frauduleuse et que lโinstitution รฉmettrice sโรฉtait abstenue, dans un dรฉlai raisonnable, de les prendre en considรฉration aux fins de rรฉexamen du bien-fondรฉ de la dรฉlivrance desdits certificats, et dans lโaffirmative, sans รฉtablir, sur la base de lโexamen des รฉlรฉments concrets et dans le respect des garanties inhรฉrentes au droit ร un procรจs รฉquitable, lโexistence dโune fraude de la part de la sociรฉtรฉ poursuivie, constituรฉe, dans son รฉlรฉment matรฉriel, par le dรฉfaut, dans les faits de la cause, des conditions prรฉvues ร lโarticle 14, paragraphe 2, sous a, prรฉcitรฉ aux fins dโobtention ou dโinvocation des certificats E101 en cause et, dans son รฉlรฉment moral, par lโintention de ladite sociรฉtรฉ de contourner ou dโรฉluder les conditions de dรฉlivrance dudit certificat pour obtenir lโavantage qui y est attachรฉ (arrรชt nยฐ 1, pourvoi 13-88.631, arrรชt nยฐ 2, pourvoi 13-88.632 et arrรชt nยฐ 3, pourvoi nยฐ 15-80.735).
En revanche, prononce par des motifs conformes ร la doctrine de la Cour de lโUnion europรฉenne prรฉcitรฉe, la cour dโappel qui, pour relaxer les prรฉvenues, sociรฉtรฉs dโaviation civile, รฉnonce que lโenquรชte nโ a pas permis de constater les รฉlรฉments de fraude et sโabstient, en consรฉquence, dโopรฉrer une vรฉrification relative aux certificats E101 produits par elles (arrรชt nยฐ 4, pourvoi nยฐ 1581316).
"""
repeat = 1000
# use_fast
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert/camembert-base-ccnet", use_fast=True)
_ = tokenizer(text)
start = time.time()
for _ in range(repeat):
_ = tokenizer([text] * 10)
print("fast", time.time() - start)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert/camembert-base-ccnet", use_fast=False)
_ = tokenizer(text)
start = time.time()
for _ in range(repeat):
_ = tokenizer([text] * 10)
print("slow", time.time() - start)
```
Produces
```
wandb: WARNING W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
fast 16.272130966186523
slow 22.52426290512085
```
... as expected!<|||||>@pommedeterresautee
Hi, I am not sure it's a `fast` tokenizers bug; it may rather be a property that was (maybe unintentionally) dropped from Tokenizers.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("camembert/camembert-base-ccnet", use_fast=False)
tokenizer.model_max_length
# Out[4]: 1000000000000000019884624838656
```
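In the meantime, a possible stopgap on the user side (assuming 512 is the limit this checkpoint is supposed to report) is to set the value manually:
```python
tokenizer.model_max_length = 512  # manual override until the dropped property is restored
```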
Can you tell us what the actual bug is for you in the end? Just to make sure the fix I am working on will actually work as generally as possible.<|||||>I think you are right, it's more about a dropped property in the config file or a change in the source code than a bug specific to the fast tokenizer.
I discovered the issue because I was comparing the model + tokenizer as exported a few months ago with the fast tokenizer of today, and thought the difference was caused by the fast tokenizer. My "old" export returns 512 when I call `max_len`.
Still it's not returning the correct value, fast tokenizer or not.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi, this issue is still not resolved; I have the same problem with `camembert/camembert-large` |
transformers | 8,116 | closed | Add labels padding in tokenization_utils_base.py | # What does this PR do?
This PR makes `tokenizer.pad()` also pad `'labels'`.
I tried to use this:
https://github.com/huggingface/transformers/blob/8065fea87007fbf7542fc060ff8ddd0b5df567da/src/transformers/data/data_collator.py#L69
But since labels are not padded, the result cannot be turned into a tensor: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.`
This patch solves the problem.
It seems logical to me that `tokenizer.pad()` should also pad `'labels'`.
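For illustration, the kind of change I have in mind looks roughly like this (a sketch, not the exact diff; using `-100` as the label padding value is an assumption on my side):
```python
# sketch only: inside the padding loop, next to the input_ids / attention_mask handling
if "labels" in encoded_inputs:
    # -100 is assumed here as the usual ignore index for the loss
    encoded_inputs["labels"] = encoded_inputs["labels"] + [-100] * difference
```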
This portion of code is last changed in #4015 @n1t0 @thomwolf @LysandreJik | 10-28-2020 10:58:11 | 10-28-2020 10:58:11 | Hi there! Thanks for your PR! I see a few problems with this approach.
1. Not all labels need to be padded. If you are doing classification (with one or multiple labels) you don't want to pad them
2. I imagine you are in a token classification problem, and in those, the number of labels is not necessarily the same as the number of tokens, as the labels are for words and tokens can be parts of words.
I think the proper fix is to create an option in `DataCollatorWithPadding` to activate label padding (so a flag `pad_labels_too` or something like that) that then pads the labels to the maximum length of the labels (so `difference` that you use here might be a different number for the labels).<|||||>Thanks for the reply!
Considering that different problems may pad labels differently, I think it may be better to leave it as is and use this:
```python
from typing import Dict, List, Union
import torch
from transformers import DataCollatorWithPadding

class MyDataCollatorWithPadding(DataCollatorWithPadding):
    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        batch = super().__call__(features)
        # add custom label padding here (e.g. pad each example's labels to the longest in the batch)
        return batch
```
Just came up with this. ๐ Not sure if it works.<|||||>Just tried it, the above code does not work, because the error is in `self.tokenizer.pad()`.
Here is the truncated trace:
```
src/transformers/data/data_collator.py", line 103, in __call__
batch = self.tokenizer.pad(
src/transformers/tokenization_utils_base.py", line 2408, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
src/transformers/tokenization_utils_base.py", line 186, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
src/transformers/tokenization_utils_base.py", line 571, in convert_to_tensors
raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
Therefore `pad_labels_too` needs to be in `tokenizer.pad()`.
@sgugger
> the number of labels is not necessarily the same as the number of tokens, as the labels are for words and tokens can be parts of words.
Maybe we will need a `LabelPaddingStrategy` similar to `PaddingStrategy`. But I don't know what kinds of other label padding strategies need to be added.<|||||>I think you should use the newly pushed DataCollatorForTokenClassification from #8274.<|||||>Very nice! I guess I will close this PR. |
transformers | 8,115 | closed | Fix eval ref miss in Chinese WWM. | Sorry for my recklessness: I didn't add the `eval_ref_file` param for Chinese WWM.
It was found by @johnsonice in [here](https://github.com/huggingface/transformers/pull/7925#issuecomment-717701325).
So I fix it and update readme for Chinese WWM. | 10-28-2020 10:13:48 | 10-28-2020 10:13:48 | Before we merge you'll need to run `make fixup` or `make style` at the root of your transformers clone to pass the code quality test.<|||||>I had not realized that LTP has pinned master to 3.2.0. We can't have a script in examples that doesn't run on master, so I suggest copying the current version and moving it in the examples/contrib folder (or hosting it on your GitHub if you prefer) while still linking to it from the README.
We are in the process of rewriting all examples (and this script as it is will change in the next few days) to match the current version of transformers/datasets so this master requirement is really important. <|||||>> I had not realized that LTP has pinned master to 3.2.0. We can't have a script in examples that doesn't run on master, so I suggest copying the current version and moving it in the examples/contrib folder (or hosting it on your GitHub if you prefer) while still linking to it from the README.
>
> We are in the process of rewriting all examples (and this script as it is will change in the next few days) to match the current version of transformers/datasets so this master requirement is really important.
It seems the requirements of LTP are fixed now.
So I moved `run_chinese_ref.py` to `examples/contrib` and updated the readme.<|||||>Great! If you can just merge Lysandre's suggestion, this should be good to merge then.<|||||>Applied the suggestion, merging! |
transformers | 8,114 | closed | Pegasus: Error when training with increased input length | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: MacOS Mojave (10.14.6)
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sshleifer
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): Pegasus
The problem arises when using:
* [ ] the official example scripts: NA
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: NA
* [x] my own task or dataset: Long input training on the standard CNN/DM dataset.
## To reproduce
Steps to reproduce the behavior:
1. I am trying to train a Pegasus model using the following script on a larger input length.
```python
from transformers import PegasusConfig, PegasusForConditionalGeneration, PegasusTokenizer
from transformers import Trainer, TrainingArguments
from examples.seq2seq.utils import Seq2SeqDataset
config = PegasusConfig(
max_length=2048,
max_position_embeddings=2048,
encoder_layers=16,
decoder_layers=4,
num_beams=2
)
tokenizer = PegasusTokenizer.from_pretrained("sshleifer/distill-pegasus-cnn-16-4")
model = PegasusForConditionalGeneration(config=config)
dataset = Seq2SeqDataset(data_dir='data/cnn_dm', tokenizer=tokenizer, max_source_length=2048, max_target_length=150)
training_args = TrainingArguments(
output_dir="./data/output",
overwrite_output_dir=True,
num_train_epochs=1,
save_steps=10,
save_total_limit=2,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset,
prediction_loss_only=True,
)
trainer.train()
```
2. I am getting the following error message
```
/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py:263: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. Setting `args.prediction_loss_only=True
FutureWarning,
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/train_scratch.py", line 34, in <module>
trainer.train()
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 756, in train
tr_loss += self.training_step(model, inputs)
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 1056, in training_step
loss = self.compute_loss(model, inputs)
File "/Users/sdasgupta02/code/summarization/summarization-long/transformers/src/transformers/trainer.py", line 1080, in compute_loss
outputs = model(**inputs)
File "/anaconda3/envs/transformers/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'input_ids'
0%| | 0/2 [00:00<?, ?it/s]
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Expected behaviour is to be able to train this Pegasus model on the CNN/DM dataset with longer input sequences (> 1024).
<!-- A clear and concise description of what you would expect to happen. -->
| 10-28-2020 09:22:19 | 10-28-2020 09:22:19 | (This issue has nothing to do with pegasus or input dimension).
+ `max_length` should not be set like that, it refers to the maximum length to generate.
+ You probably don't want to initialize from random.
+ You should be using Seq2SeqTrainer, and this script https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_trainer.py#L2
You may need to modify line 188 to pass `max_position_embeddings=2048` and line 189 to pass `model_max_length=2048`. |
transformers | 8,113 | closed | [WIP] Add Tapas model | # What does this PR do?
Since the beginning of August, I'm working in my free time on incorporating the [Tapas](https://arxiv.org/abs/2004.02349) algorithm by Google AI in the Transformers library (because this library is awesome and I want to contribute to it!). Tapas is basically a BERT model with some clever modifications for natural language understanding related to **tabular data** (structured data like tables, or even HTML). Adding this model could foster research in this area ๐
Demo's of my current implementation:
* [colab notebook](https://colab.research.google.com/drive/1feRe1Jyjtw7iZVRiKWulP6WjBW6hBIJE?usp=sharing) to showcase `TapasForQuestionAnswering` on WTQ (WikiTable Questions by Stanford University)
* [colab notebook](https://colab.research.google.com/drive/1CDPUr7c8uCNnCtAmmFfj91j-sIzdcqym?usp=sharing) to showcase `TapasForQuestionAnswering`on SQA (Sequential Question Answering by Microsoft Research)
* [colab notebook](https://colab.research.google.com/drive/1JDwWrwHSt8KhGBQ57BDlCFZEe0xMGTQA?usp=sharing) to showcase `TapasForSequenceClassification` on TabFact (Table Fact checking, introduced at ICLR this year)
The model weights are available on the [original Github repository](https://github.com/google-research/tapas), and I wrote a conversion script (similar to other models in the Transformers library) to load them into their PyTorch counterpart.
I suggest reading the [paper](https://arxiv.org/abs/2004.02349) as well as my [notes](https://docs.google.com/document/d/1WIdZX6of1l-c4AmT909PT7Dpj57EfqUh8BBPaf9ztOw/edit?usp=sharing) to gain a full understanding of how the model works and how I implemented it. There's also a [blog post](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) by Google AI as well as a [video](https://www.youtube.com/watch?v=cIUtRNhY6Rw&ab_channel=YannicKilcher) by Yannic Kilcher explaining how the algorithm works.
The main classes are `TapasConfig`, `TapasModel`, `TapasForQuestionAnswering` and `TapasForSequenceClassification` which can all be found in `modeling_tapas.py`. I'm quite sure the models are OK, the output is the same as the Tensorflow implementation. I added a very extensive documentation (docstrings) to all classes, which you can view by running the `make html` command from the docs directory. Feedback appreciated!
However, there are 2 things for which I need some help/opinions to finish this work:
## 1. Making TapasTokenizer fully Transformers-compliant
To implement `TapasTokenizer`, I need some help/opinions. I suggest using Pandas dataframes as the central object for tabular data (as shown in the Colab notebooks above). and let the API be as follows:
```
from transformers import TapasTokenizer
import pandas as pd
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"],
'Date of birth': ["18 december 1963", "11 november 1974", "6 may 1961"]}
table = pd.DataFrame.from_dict(data)
queries = ["When was Brad Pitt born?",
"Which actor appeared in the least number of movies?",
"What is the average number of movies?"]
tokenizer = TapasTokenizer.from_pretrained("tapas-base-finetuned-wtq")
inputs = tokenizer(table=table, queries=queries)
```
Currently I've only implemented the `batch_encode_plus` method of TapasTokenizer, because it's not really clear to me how to make it fully compatible with the Transformers library, since the way that data is prepared for the model is a bit different compared to BERT/RoBERTa/etc (see also my notes above). It's also not straightforward to make it compatible with the different padding/truncation strategies of Transformers. Currently, the way it works is as follows: thereโs a function `_get_token_budget` in `tokenization_tapas.py` that calculates the number of tokens left for the flattened table after tokenizing a question. This is currently set to `self.model_max_length - (len(question_tokens) + 2)` (+ 2 for the CLS and SEP tokens), as was done in the original implementation. There is a hyperparameter when initializing `TapasTokenizer` called `drop_rows_to_fit` which drops rows of the table to fit into the token budget if set to `True`. If itโs set to `False` and a table is too big, it throws a `ValueError` indicating 'too many rows'.
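For clarity, the budget computation described above is essentially the following (a sketch of the function mentioned above; the exact signature is my assumption):
```python
def _get_token_budget(self, question_tokens):
    # tokens left for the flattened table: model_max_length minus the question tokens and the [CLS]/[SEP] tokens
    return self.model_max_length - (len(question_tokens) + 2)
```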
## 2. Testing
Currently I've written `test_modeling_tapas.py` (23 tests passed, 5 failed) and `test_modeling_tapas_utilities.py` (9 tests passed). However, there are 4 different settings to use `TapasForQuestionAnswering` (see my notes) and these all need to be tested (currently only 1 setting is tested) - some help here would be great. Besides this, tests should be added to see whether the model can be properly trained, as well as adding `test_tokenization_tapas.py` (which depends on how TapasTokenizer will be implemented).
Fixes the following issues (people requesting to add the model):
- #4166
- #4288
## Who can review?
I suggest @sgugger @LysandreJik since we already discussed this on the forum [here](https://discuss.huggingface.co/t/adding-a-new-model-to-transformers-with-additional-dependencies/916/15).
tokenizers: @mfuntowicz
DISCLAIMER: this is my first PR of my life, never done this before, hopefully I don't mess up anything (just got the Pro Git book ๐). I assume I should not use `git rebase` anymore now since this branch submitted as PR and should only use `git add`, `git commit` and `git push -u origin tapas_v3`? And `git pull origin tapas_v3` in case others make commits to my branch?
Is there a Slack channel where people can help me out in case I have git issues? | 10-28-2020 08:38:21 | 10-28-2020 08:38:21 | I think you might need a rebase on the latest master as your PR seems to have taken master from a while ago (all the modifications in the README should not be there for instance). If it messes the diff, we can always close this PR and open a new one, the branch will be safe :-)<|||||>EDIT: I've always used `git fetch upstream` and `git rebase upstream/master`, before pushing my local `tapas_v3` branch to my fork on Github. I didn't know that all models starts with 1. now in the README (therefore I manually changed the numbering), should be fixed now, branch is up-to-date.<|||||>@sgugger should I keep rebasing this branch everyday, to keep up with master (as long as the code is not being reviewed)?
Also, is it normal that I have to do a force push everytime I perform a rebase and want to push to Github? Because when I want to do simply `git push -u origin tapas_v3`, I always get
```
(env) PS C:\Users\niels.rogge\Documents\Python projecten\transformers> git push -u origin tapas_v3
To https://github.com/NielsRogge/transformers.git
! [rejected] tapas_v3 -> tapas_v3 (non-fast-forward)
error: failed to push some refs to 'https://github.com/NielsRogge/transformers.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
```
after a local rebase. <|||||>I'm far from being an expert on git and I don't use the command line anyway, so can't really help you with that.<|||||>> Hi! Thanks a lot @NielsRogge for implementing this model! It's a great model that definitely deserves its place in the huggingface library, and you did a great job implementing it! I'm reviewing the code below.
Thank you! Great to hear :)
@LysandreJik I addressed all of the comments. To summarize:
* `TapasConfig` and `TapasTokenizer` now inherit from `PretrainedConfig` and `PreTrainedTokenizer` respectively. For `TapasTokenizer`, a lot of the code of `tokenization_bert.py` was copied (such as the `BasicTokenizer` and `WordPieceTokenizer` classes), since the tokenization logic of text itself is the same. However, some things are different (see `tokenization_tapas.py`).
* `Modeling_tapas_utilities.py` and `tokenization_tapas_utilities.py` are also gone now, they are added to the bottom of `modeling_tapas.py` and `tokenization_tapas.py` respectively.
* `pandas` is not a dependency in the code (`pandas` is not imported in any of the files), but I assume you want to switch to `datasets` so that people can use Tapas using only the `Transformers` library? However, currently, in `tokenization_tapas.py` some Pandas logic is used, for example in the `_tokenize_table` function `.iterrows()` is used, so this will require some changes.
* concerning git, I assume I should stop rebasing at some point? I can do it as long as I'm the only one committing?<|||||>Hey @LysandreJik, thank you for your feedback.
I've fixed all comments that you had, apart from the tokenizer itself.
> This is a complicated part, so please let us know if you would like some help along the way/if you want us to take over from here, in which case we would open PRs against your branch with proposals.
I'm happy to accept PRs against my branch, because it's not that clear to me how the tokenizer should be implemented in the best possible way. Does that mean I should stop rebasing my branch with `upstream/master`? Since I read about the "golden rule of rebasing", which states to "never use it on public branches" ๐
<|||||>Alright, I'll take a look and open a PR on your fork with the proposed changes. Yes, please don't rebase on `master` anymore as it would mess up my history as I start working on your branch.
We can keep it as it is until we merge now, and fix the merge conflict as the last step.<|||||>Hi @NielsRogge, I'm nearly done with the tokenizer changes, but we're focusing on getting version v3.5.0 out today and tomorrow. I'll try to open a PR on your repository then. Please hold off from adding commits now, as rebasing/merging would be very painful now! :smile: <|||||>@LysandreJik no worries, I'm not working on the `tapas_v3` branch.
However, there's still some important work left to do in terms of preparing the data for the model. In the original implementation, this is done in 3 steps:
## 1. TSV in the SQA format
Any dataset (SQA, WTQ, WikiSQL) is first transformed into a TSV with the same columns as the SQA format:
* id: id of the table-question pair, if applicable.
* annotator: id of the annotator, if applicable.
* position: integer indicating if the question is the first, second, third,... related to the table. Only required in case of conversational setup (such as SQA)
* question: string
* table_file: string, name of a csv file containing the tabular data
* answer_text: list of strings (each string being a cell value that is part of the answer)
* answer_coordinates: list of string tuples (each tuple being a cell coordinate, i.e. row, column pair that is part of the answer)
* aggregation_label: only required in case of strong supervision for aggregation (such as WikiSQL-supervised)
* answer_float: float answer to the question. Only required in case of weak supervision for aggregation (such as WTQ and WikiSQL)
If people want to fine-tune `TapasForQuestionAnswering` on their own dataset, they must prepare it in this TSV format, and associated csv files containing the tabular data. It would be great if we can upload all datasets (SQA, WTQ, WikiSQL and WikiSQL-supervised) in SQA format to the HuggingFace datasets hub (they can easily be obtained from the official Tapas Github repo).
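Purely for illustration (all values below are made up), one parsed row in this format could look like:
```python
# hypothetical example of a single row in the SQA format (values invented for illustration)
example_row = {
    "id": "example-0",
    "annotator": 0,
    "position": 0,
    "question": "how many movies has george clooney played in?",
    "table_file": "table_csv/example_table.csv",
    "answer_text": ["69"],
    "answer_coordinates": ["(2, 2)"],
    "aggregation_label": 0,   # only needed for strong supervision (WikiSQL-supervised)
    "answer_float": 69.0,     # only needed for weak supervision (WTQ, WikiSQL)
}
```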
## 2. Intermediate format: Interaction
Next, each table-question pair is transformed into an intermediate protocol buffer message which the authors call **Interaction**. Its properties are defined [here](https://github.com/google-research/tapas/blob/master/tapas/protos/interaction.proto), and include things like Table, Question, Answer, AnswerCoordinate, Cell, NumericValue, NumericValueSpan, etc. Populating all the fields of an Interaction based on the TSV is defined in [interaction_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py), [interaction_utils_parser.py](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils_parser.py), [number_annotation_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/number_annotation_utils.py), [number_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/number_utils.py) and [text_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/text_utils.py).
## 3. tf.train.Example
Finally, each interaction is transformed into an actual training example (`tf.train.Example`), containing the input_ids, mask, etc. as `tf.train.Feature` objects. This is defined in [tf_example_utils.py](https://github.com/google-research/tapas/blob/master/tapas/utils/tf_example_utils.py).
_________________
`TapasTokenizer` must be able to directly convert a row (or multiple rows, i.e. a batch) from a TSV file into a dictionary with PyTorch tensors as values (in other words, combine steps 2 and 3). The remaining work is basically step 2. As I worked with `Pandas` as standard format for tables in my implementation, my idea was to define regular Python classes for each property of the Interaction proto. That is why I have defined a `NumericValue` class, `NumericValueSpan` class, `Cell` class, `Date` class, etc. in `tokenization_tapas.py`. Instances of these classes are then created each time `TapasTokenizer` is called.
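To make that idea concrete, here is a minimal sketch of what such plain-Python containers could look like (field names are my assumption and not the exact proto definition):
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Date:  # assumed fields, mirroring the proto's date message
    year: Optional[int] = None
    month: Optional[int] = None
    day: Optional[int] = None

@dataclass
class NumericValue:  # a parsed value is either a float or a date
    float_value: Optional[float] = None
    date: Optional[Date] = None

@dataclass
class Cell:  # one table cell plus its parsed numeric value, if any
    text: str
    numeric_value: Optional[NumericValue] = None
```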
I've noticed that the creation of the numeric values is not entirely correct in the `tapas_v3` branch. I'm now working on a correct implementation of this in a branch called `tapas_v3_up_to_date_with_master` (in which I also regularly rebase with upstream/master). It only involves changes to `tokenization_tapas.py`. The changes can eventually be added to `tapas_v3`. I'll wait until your PR is merged before I add those changes.
So my questions are:
- for each of the `xxx_utils.py` files which are used in step 2, there are corresponding `xxx_utils_test.py` files. Could you help in setting up tests in `test_tokenization_tapas.py`, to make sure we're following the original implementation?
- I'm still assuming that tables are `pandas` dataframes in `tokenization_tapas.py`. Is this OK? Or do you want to change to `dataset`? Wouldn't it be more logical to have SQA/WTQ/WikiSQL as `dataset` objects, and the actual tables as `pandas` dataframes? Pandas is not a dependency of `tokenization_tapas.py`, but tables must be provided as a Pandas dataframe to `TapasTokenizer`.
<|||||>Hi @NielsRogge! I just finished the tokenizer and its tests. The tests were kind of painful, as the API is a bit different (accepting a dataframe instead of a string), so I had to override most of the tests.
Here's how you can review:
- I did a `make style` on your branch, as I find the code easier to navigate once it's on par with our style.
- However, this introduces a bunch of changes that would make the PR hard to review.
- In order to circumvent this, I've pushed two branches:
- `tapas-style`, which is the exact branch you have, but with `make style` run on it and a few cosmetic adjustments
- `tapas-final`, which builds on top of `tapas-style` to implement all the tokenizer API and tests
- For you to review, the easiest would be to review the PR I opened [here](https://github.com/huggingface/transformers/pull/8482), which aims to merge `tapas-final` into `tapas-style`. This way you can see the actual changes and only these.
- I described my changes in that PR's description so as not to clog this one.
- I set up a todo list of items remaining on the tokenizer, which are not blocking for the merge.
Please review https://github.com/huggingface/transformers/pull/8482, and tell me if you're okay with the changes. If you're okay, I'll merge `tapas-final` into `tapas-style`, and open a PR on your fork with the branch `tapas-style`, which will have all the changes.
This is one of the best hands-on introduction to git you could ask for :smile:
Regarding your questions about data processing:
> for each of the xxx_utils.py files which are used in step 2, there are corresponding xxx_utils_test.py files. Could you help in setting up tests in test_tokenization_tapas.py, to make sure we're following the original implementation?
Yes, I can help you with that.
> I'm still assuming that tables are pandas dataframes in tokenization_tapas.py. Is this OK? Or do you want to change to dataset? Wouldn't it be more logical to have SQA/WTQ/WikiSQL as dataset objects, and the actual tables as pandas dataframes? Pandas is not a dependency of tokenization_tapas.py, but tables must be provided as a Pandas dataframe to TapasTokenizer.
Actually, `datasets.Dataset` behave very similarly to `pd.DataFrame`s. Nevertheless, we can start with Pandas DataFrames for now, and change to dataset's Datasets once progress is made<|||||>@LysandreJik I have improved the parsing of numeric values of both the question and table in `prepare_for_model` of `tokenization_tapas.py` to reflect the original implementation. What it does is turn the cells of a table into `Cell` objects (with potentially associated `NumericValue` objects) and the question into a `Question` object (with potentially a list of associated `NumericValueSpan` objects), before adding numeric-related features.
Besides this, I have fixed some of the comments I had on [my review of your PR](https://github.com/huggingface/transformers/pull/8482), and commented "done" on the ones that are fixed.
## To do:
- [x] Add correct implementation of `prev_label_ids` in case of a batch of table-question pairs (in case of a batch, all questions should refer to the same table). The implementation should reflect the [original implementation](https://github.com/google-research/tapas/blob/d8638f0909b3de32a85fe7491769d47d645d8e22/tapas/utils/tf_example_utils.py#L1155) as follows: for a given table-question pair in a batch,
```
prev_label_ids = self.get_answer_ids(
column_ids, row_ids, table_data, answer_text, answer_coordinates
)
```
Here, it's important that the `get_answer_ids` function is called with the `column_ids` and `row_ids` of the **current** table-question pair in the batch, but the `answer_text` and `answer_coordinates` of the **previous** table-question pair in the batch.
- [x] Fix the error that I'm currently having in the colab notebooks above (see first message of this PR), when `answer_coordinates` and `answer_text` are provided to the tokenizer. However, what's weird is that when calling `TapasTokenizer` on the real SQA dev set, everything works fine. Might be that I'm doing something wrong with the coordinates and text I provide?
- [x] Add support for the `drop_rows_to_fit` and `cell_trim_length` attributes of `TapasTokenizer`, which should reflect the original API (see also my [suggestion](https://github.com/huggingface/transformers/pull/8482#discussion_r522131827) on how this could be done for `cell_trim_length`). Also, setting `truncation=True` in `TapasTokenizer` doesn't do anything currently.
- [x] I've added support for the special [EMPTY] token for empty cells in a table (based on the `format_text` method, see [here](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L330)). Does this have implications for the `add_special_tokens` method? I assume not? What about `get_special_tokens_mask`? To be verified.
- [x] **Testing TapasTokenizer:** make sure that the PyTorch tensors that `TapasTokenizer` creates are exactly the same as those of the original implementation on the same input data. I've created a [notebook](https://colab.research.google.com/drive/1MzyO-QSA5PZNCNoWa2EIrSA8UEqyUZVp) that tests this. Currently there's a misalignment due to the fact that the original implementation tokenizes a cell value like "1.0" into ["1", "."], whereas my implementation tokenizes this into ["1", ".", "0"]. Filed a Github issue to resolve this.
- [x] **Testing forward pass:** make sure that `TapasForQuestionAnswering`/`TapasForSequenceClassification` return the same `sequence_output`, `pooled_output`, `logits`, and `loss` tensors as the original implementation on the same input data. I've created notebooks that test this:
- SQA (`tapas_sqa_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/14bdSwdzvCF2gDF3L0z58IT1fSNzOXKey#scrollTo=6fvJbFF-xKfh) vs [Tensorflow](https://colab.research.google.com/drive/1KWD187cWDP-lOOwKjwzGtGfVR9UWzZVL#scrollTo=KTlX8ZEuRTBa) is giving me the same output (inference only). UPDATE: also loss calculation is OK, see [PyTorch](https://colab.research.google.com/drive/1z2ZRIBXOTk3Aqh6OYpRak6iJhs7e1l2S#scrollTo=2kakMASqmrG5) vs [Tensorflow](https://colab.research.google.com/drive/1Ba3jARJcAqRTd0uuPxOTjOrruIcXIV54#scrollTo=2ZIdZEJGw5RK).
- WTQ (`tapas_wtq_wikisql_sqa_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/1Z4T9ZzMvg3vGZ3dSNWbiA4FVbgMMkq_9#scrollTo=EXS4MmCy8Dti) vs [Tensorflow](https://colab.research.google.com/drive/1klaSP99q2aicwpVV9GrmL5nvrPGqrSPH#scrollTo=SIE7bTJMVuSh). I'm getting the same `sequence_output` and `logits_aggregation` on the same input data :) UPDATE: also loss calculation is OK, see [PyTorch](https://colab.research.google.com/drive/19Uq6k1f1178okv80Julfa0Zg41fvFN9x#scrollTo=LEOCtWmWt2IH) vs [Tensorflow](https://colab.research.google.com/drive/1ScF4R7Au8gbC5lN1ehTQFdDknmr2mMRz#scrollTo=GLgez6jJx9Xc) notebooks.
- Tabfact (`tapas_tabfact_inter_masklm_base_reset`): [PyTorch](https://colab.research.google.com/drive/1JDwWrwHSt8KhGBQ57BDlCFZEe0xMGTQA?usp=sharing#scrollTo=z6esPfPMFH1p) vs [Tensorflow](https://colab.research.google.com/drive/14-6VFjvrIiXsYPQEtv8MN-a8Mpo1UNH7#scrollTo=LYBkiqo38e7l) is giving me the same classification logits, confirming that the relative position embeddings implementation is OK.
- [x] **Testing backward pass:** I've created a [notebook](https://colab.research.google.com/drive/17L97m7cq7J_pnUHmQW6-ksGGVpbbGryP#scrollTo=y0YzoGY24I0C) that fine-tunes `TapasForQuestionAnswering` on 10 examples of the WTQ test set, just to see if it's able to overfit them. I've tried both with randomly initialized classification heads, as well as with the already-finetuned WTQ model. This seems to work well for the former (it can overfit the cell selection, however for aggregation this seems more difficult - probably due to the weak supervision). However, for the already fine-tuned one, the loss stays zero after the third example already. Is this a bug, or is this possible? Update: confirmed by the author.
Actually, reviewing the code of `modeling_tapas.py`(loss calculation + backward pass) is the most important. <|||||>> @NielsRogge if you want I can take care of the remaining steps for the tokenizer:
>
> > * Add support for the drop_rows_to_fit and cell_trim_length attributes of TapasTokenizer, which should reflect the original API (see also my suggestion on how this could be done for cell_trim_length). Also, setting truncation=True in TapasTokenizer doesn't do anything currently.
> > * I've added support for the special [EMPTY] token for empty cells in a table (based on the format_text method, see here). Does this have implications for the add_special_tokens method? I assume not? What about get_special_tokens_mask? To be verified.
@LysandreJik ok that would be great!
<|||||>Thanks for the reviews, I've updated the requested changes and marked the ones I did as resolved.
@LysandreJik, could you maybe fix the remaining comments? In short:
* remove the encoder-decoder logic of `TapasModel` (only remove this from the API of `TapasModel`, but leave them in the code that's copied from BERT (that the user won't see and won't use). I'll let you do this since I don't want to mess up anything.
* remove some tests and add a slow test as requested above
... then I'll mark these as resolved. Besides these, there's also the truncation of `TapasTokenizer` which should still be implemented. I copied what was left here:
* Add support for the `drop_rows_to_fit` and `cell_trim_length` attributes of `TapasTokenizer`, which should reflect the original API (see also [my suggestion](https://github.com/huggingface/transformers/pull/8482#discussion_r522131827) on how this could be done for `cell_trim_length`). The original implementation can be found [here](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L999).
* Add support for the special `[EMPTY]` token for empty cells in a table (see the `_tokenize` method of `TapasTokenizer`, which now uses the `format_text` method as in the [original implementation](https://github.com/google-research/tapas/blob/4908213eb4df7aa988573350278b44c4dbe3f71b/tapas/utils/tf_example_utils.py#L330)). Does this have implications for the `add_special_tokens` method? I assume not? What about `get_special_tokens_mask`? To be verified.
* There was also a small discrepancy between the tokenization of TAPAS and the original implementation, see this [Github issue](https://github.com/google-research/tapas/issues/90#issuecomment-735723963). I don't expect this too big of an issue, but maybe you know more about this.
And then I assume we're done ๐ (finally)<|||||>Sure, will do so. Probably tomorrow morning/afternoon!<|||||>Closing this one as the most up-to-date is now #9117 . |
transformers | 8,112 | closed | Documentation code snippet has extra ) after model code | Documentation at https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforsequenceclassification has code snippet
```
>> from transformers import RobertaTokenizer, TFRobertaForSequenceClassification
>> import tensorflow as tf
>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
>> model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True))
>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1)) # Batch size 1
>> outputs = model(inputs)
>> loss = outputs.loss
>> logits = outputs.logits
```
In the fourth line there is an extra `)` at the end.
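The line should simply read:
```python
model = TFRobertaForSequenceClassification.from_pretrained('roberta-base', return_dict=True)
```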
This is issue is also for other model code snippet as well | 10-28-2020 06:14:59 | 10-28-2020 06:14:59 | Hello, this has been fixed on: https://github.com/huggingface/transformers/pull/8082 and is now available in the `master` documentation and will be updated in the next version. Thanks for opening an issue! |
transformers | 8,111 | closed | [Model] mT5 Cross-Lingual Model | # ๐ New model addition
## Model description
<!-- Important information -->
Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a similar recipe as T5.
Weights and code are available.
Github Repo: [mT5 Weights and Code](https://github.com/google-research/multilingual-t5)
Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934)
## Open source status
* [x] the model implementation is available: [Implementation](https://github.com/google-research/multilingual-t5)
* [x] the model weights are available: [checkpoints](https://github.com/google-research/multilingual-t5#released-model-checkpoints)
* [x] who are the authors: (@craffel, @adarob)
| 10-28-2020 05:50:38 | 10-28-2020 05:50:38 | Will be a part of #6285<|||||>Hey, @sumanthd17 any update on this? <|||||>@julien-c thanks for your amazing nlp lib.
When do you plan to support mT5?
When will #6285 be released?
Cheers
Philippe <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 8,110 | closed | [gh actions] run artifacts job always | I see that the recently added artifacts job won't run if the test job failed, which defeats the purpose. ([example](https://github.com/huggingface/transformers/runs/1317972448?check_suite_focus=true))
After some research it appears that adding `if: always()` may do the trick. Supposedly such a job should always be run regardless of the outcome of the previous jobs. Found it [here](https://github.community/t/continue-on-error-allow-failure-ui-indication/16773/2?u=stas00), documented [here](https://docs.github.com/en/free-pro-team@latest/actions/reference/context-and-expression-syntax-for-github-actions#always).
Let's merge and see if it fixes the issue.
@LysandreJik, @sgugger, @sshleifer | 10-28-2020 05:39:14 | 10-28-2020 05:39:14 | Please check if it worked :)<|||||>Hmm, I guess it'd be have been more practical to experiment with the push job and not scheduled - but ok, let's wait - if it's not working, I will attack it on all fronts.<|||||>One more thing:
[run_all_tests_torch_and_tf_gpu](https://github.com/stas00/transformers/blob/e248a114e7f13a970bd8b5e52c0a032c014f4a57/.github/workflows/self-scheduled.yml#L57) has 3 independent test suite runs and currently if one fails the others don't run! Which is not what is wanted I believe. I suggest that we add `if: always() ` to the last 2 test suites as they are independent from the first one. <|||||>I'm fine with that as long as the workflow run can still be correctly marked as a failure.
To rephrase the requirement at the risk of redundancy, we want a red x next to the job when any test fails:

Screenshot shows that we are meeting this requirement at the moment.
<|||||>Yes, that's the idea. I think the intention of the `if` condition is to define only whether a job is to be run, and not impact the total outcome. But we will see that already with the results of this PR - as artifact upload job will surely succeed. If the total outcome is [x] and artifacts have run, then we can replicate that condition to the other test suites. <|||||>OK, It did the trick, see: https://github.com/huggingface/transformers/actions/runs/334754818
Specifically, as requested: the final result is [x] and the artifact job did run regardless.
So we can apply this `if: always` condition to other `pytest` jobs on the same workflow. There is a nuance: pre-pytest jobs could fail and the `pytest` jobs would run anyway with this condition, but if that situation arises, it makes no difference - those jobs will just immediately fail.
Notes:
* It puts the artifact files in the same place from different jobs, so we need to call that artifact upload job differently for each job
* The so-so part is that the artifacts on github actions are provided as a single zipped file, so you have to first download the file, unpack it and only then you can see the results.
* Moreover it doesn't show the artifact file until **all** jobs have completed, despite saying that the file was successfully uploaded.
**A Possible workaround:**
One possible optimization here could be to `cat reports/report_tests_failures.txt` right after `pytest`, in a separate mini-step, so that you can immediately see just the failures and not wait for everything else to finish and go through the multiple steps to get to this file. (It has to be a separate step (name+run) so as not to affect the success/failure exit status of the `pytest` step.)
Please review the outcome/my notes and let me know whether we proceed with this to other jobs.
Specifically to moving forward, we probably need to wait for this to be merged: https://github.com/huggingface/transformers/pull/8007 as it has multiple changes to the CI files.
<|||||>i have read your report. It is very clear, thank you. let's try a careful cat solution where we keep the size of the results as small as reasonably possible. one or two screens of text that show which tests failed (and short tracebacks (pytest --tb=short) ). Thanks for the help this is going to be so much easier to use than the status quo. Let me know if further clarifications/decisions would be helpful, and feel free to push back if implementation is difficult. <|||||>wrt/ proposed workaround:
Since the proposed quick `cat` is going to be in its own collapsible "tab" and will have only failures, let's start with just `cat reports/report_tests_failures.txt` and we can create other types of reports should it prove too verbose, and just cut those instead.
But also I could probably create `reports/report_tests_failures_short.txt` report which will emulate `pytest --tb=short`, so that we will have both long and short reports.
wrt/ the rest:
it still stands, correct? i.e. we still want the full artifacts in github actions<|||||>> we still want the full artifacts in github actions
Yes, don't see any downside.<|||||>It looks like the errors are generated with either `--tb=long` or `--tb=short` at run time, so when the reports time comes they are already saved as one or the other, but not both.
So if we want the short and the long reports, one possibility is to generate the long report and then to try to make it shorter with some regex or some simple truncation - resulting in a short report.
Another approach that might work is collecting the failures as they happen - I need to investigate whether I can control the format in that hook or not without impacting the global reporting, as I'm sure that ideally we do want the full long report too. Please correct me if I'm wrong and `--tb=short` is sufficient for CIs (i.e. there will be no full long failures report anywhere - neither terminal logs nor report files), then it's easy.
<|||||>I trust you to make those choices as you see fit. feel free to ignore tb=short.<|||||>I nailed it, got the cake and ate it too. |
transformers | 8,109 | closed | T5Tokenizer: decode does not show special tokens | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: N/a
- Using distributed or parallel set-up in script? N/a
### Who can help
examples/seq2seq: @sshleifer
-->
## Information
Model I am using (Bert, XLNet ...): T5Tokenizer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import T5Tokenizer
input = "word <pad> <unk> </s> </s>"
t5tokenizer = T5Tokenizer.from_pretrained('t5-small')
tokenized = t5tokenizer.batch_encode_plus([input], max_length=10, padding="longest", return_tensors="pt").input_ids
print(t5tokenizer.batch_decode(tokenized, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
IDs output: ` _word <pad> <unk> </s> </s>`
decode output: `word โ `
## Expected behavior
The tokens should be shown in the decoded output, but everything except for the unknown token is dropped (no pad or EOS).
`convert_ids_to_tokens` followed by `convert_tokens_to_string` also drops the tokens.
<!-- A clear and concise description of what you would expect to happen. -->
| 10-28-2020 04:16:06 | 10-28-2020 04:16:06 | T5: @patrickvonplaten I think you need to set `_additional_special_tokens`.<|||||>@jsrozner want to try to fix?<|||||>This is a duplicate of #5142 and will be fixed with the PR linked below. Thanks for reporting it - seems like multiple people were running into this issue!!! |
transformers | 8,108 | closed | Support various BERT relative position embeddings | # What does this PR do?
The default BERT model `bert-base-uncased` was pre-trained with absolute position embeddings. We provide three pre-trained models which were pre-trained on the same training data (BooksCorpus and English Wikipedia) as in the BERT model training, but with different relative position embeddings (Shaw et al., Self-Attention with Relative Position Representations, https://arxiv.org/abs/1803.02155 and Huang et al., Improve Transformer Models with Better Relative Position Embeddings, https://arxiv.org/abs/2009.13658, accepted in findings of EMNLP 2020). We show how to fine-tune these pre-trained models on SQuAD1.1 data set and we also report the EM and F1 score on SQuAD1.1 dev dataset. See examples/question-answering/README.md for more details.
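For reviewers, here is a minimal usage sketch of what this enables (assuming the new configuration knob is exposed as `position_embedding_type`, with `"absolute"` remaining the default):
```python
from transformers import BertConfig, BertModel

# "relative_key" would follow Shaw et al. (2018); "relative_key_query" Huang et al. (2020)
config = BertConfig.from_pretrained("bert-base-uncased", position_embedding_type="relative_key")
model = BertModel(config)  # weights are randomly initialized here; the released checkpoints would be loaded via from_pretrained
```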
Fixes # (issue)
N/A
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @julien-c @patrickvonplaten
| 10-27-2020 22:26:48 | 10-27-2020 22:26:48 | Hey @zhiheng-huang,
Thanks for the PR!
In general I'm fine with this PR - think adding more types of position embeddings to a model is OK.
Also can you rebase your PR to the most current version of `master` - I think you are working on a rather old version.
<|||||>> Hey @zhiheng-huang,
>
> Thanks for the PR!
>
> In general I'm fine with this PR - think adding more types of position embeddings to a model is OK.
> Also can you rebase your PR to the most current version of `master` - I think you are working on a rather old version.
It was rebased to master on 10/27, will rebase again for the new revision.<|||||>@patrickvonplaten, I did a rebase and I am not sure this is the correct way to review commit "Address review comment". Please let me know if there is a better way to upload the new diff.<|||||>Hey @zhiheng-huang,
I think there was a problem with the rebase it seems like you added all commits on master on top of your PR.
This happens from time to time sadly :-/
The way I'd fix it is to first save your changes of the last commit: https://github.com/huggingface/transformers/pull/8108/commits/ffe2e64c64f03c141cc085c8f3f509ae2e0992e2 somewhere (maybe a new branch).
Then correctly reset the head of your branch to before all the other commits were falsely added:
```
git reset --hard 36729ee
```
Then add the single commit you saved in another branch
```
git cherry-pick ffe2e64
```
and finally either you correctly rebase OR the safer option here would probably be to merge the master into your branch
```
git fetch upstream
git merge upstream/master
```
Hope this helps!<|||||>> Hey @zhiheng-huang,
>
> I think there was a problem with the rebase it seems like you added all commits on master on top of your PR.
> This happens from time to time sadly :-/
>
> The way I'd fix it is to first save your changes of the last commit: [ffe2e64](https://github.com/huggingface/transformers/commit/ffe2e64c64f03c141cc085c8f3f509ae2e0992e2) somewhere (maybe a new branch).
>
> Then correctly reset the head of your branch before all other commitns were falsely added:
>
> ```
> git reset --hard 36729ee
> ```
>
> Then add the single commit you saved in another branch
>
> ```
> git cherry-pick ffe2e64
> ```
>
> and finally either you correctly rebase OR the safer option here would probably be to merge the master into your branch
>
> ```
> git fetch upstream/master
> git merge upstream master
> ```
>
> Hope this helps!
Thanks. @patrickvonplaten. this helps but I may have to revert the commits merged to zhiheng-huang:transformers-relative-embedding. I created a new PR at https://github.com/huggingface/transformers/pull/8276 to continue the review. Thanks! |
transformers | 8,107 | closed | [testing] port test_trainer_distributed to distributed pytest + TestCasePlus enhancements | This PR:
* [x] ports `test_trainer_distributed` to run with pytest - it will skip if gpus < 2.
* [x] includes various improvements, refactoring the now 3 use cases of distributed testing by extending `TestCasePlus` with a whole set of convenient features:
Feature 1: A set of fully resolved important file and dir path accessors.
In tests, we often need to know where things are relative to the current test file, and it's not trivial since the test could be invoked from more than one directory or could reside in different sub-directories. This class solves the problem by sorting out all the basic paths and providing easy accessors to them (a short usage sketch follows the lists below):
* ``pathlib`` objects (all fully resolved):
- ``test_file_path`` - the current test file path (=``__file__``)
- ``test_file_dir`` - the directory containing the current test file
- ``tests_dir`` - the directory of the ``tests`` test suite
- ``examples_dir`` - the directory of the ``examples`` test suite
- ``repo_root_dir`` - the directory of the repository
- ``src_dir`` - the directory of ``src`` (i.e. where the ``transformers`` sub-dir resides)
* stringified paths - same as above but these return a string, rather than a ``pathlib`` object
- ``test_file_path_str``
- ``test_file_dir_str``
- ``tests_dir_str``
- ``examples_dir_str``
- ``repo_root_dir_str``
- ``src_dir_str``
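A small usage sketch of the accessors listed above (the fixture path is only an example):
```python
from transformers.testing_utils import TestCasePlus

class PathAccessorsExample(TestCasePlus):
    def test_paths(self):
        # pathlib accessors are fully resolved, so joining works from any cwd
        data_file = self.tests_dir / "fixtures" / "sample.json"  # example path, not necessarily present
        assert data_file.is_absolute()
        # the stringified twins are handy when an API expects a plain str
        assert self.examples_dir_str.endswith("examples")
```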
Feature 2: Get a copy of the ``os.environ`` object that sets up ``PYTHONPATH`` correctly, depending on the test suite it's invoked from. This is useful for invoking external programs from the test suite - e.g. distributed training.
```
def test_whatever(self):
env = self.get_env()
# now call the external program, passing ``env`` to it
```
All these are also documented in `testing.rst`.
Fixes: #8058
@sgugger, @LysandreJik, @sshleifer
| 10-27-2020 21:53:53 | 10-27-2020 21:53:53 | LGTM! cc @patrickvonplaten for awareness! |
transformers | 8,106 | closed | Move installation instructions to the top | # What does this PR do?
This PR clarifies the instructions to run the examples by moving the source install to the top and putting it in bold. | 10-27-2020 21:31:36 | 10-27-2020 21:31:36 | |
transformers | 8,105 | closed | New run_clm script | # What does this PR do?
This PR adds an example of a causal language modeling fine-tuning (or training from scratch) using the ๐ค Datasets library. It supports loading a dataset via its name (from the hub) or local files. A test of training on a small text is added.
| 10-27-2020 21:14:50 | 10-27-2020 21:14:50 | |
transformers | 8,104 | closed | RagSequenceForGeneration how to get document texts retrieved in response to a query | When I run the retriever separately, how can I find out the text of the documents (from the doc_ids ?) that are retrieved ?
I created the retriever using:
retriever = RagRetriever.from_pretrained(rag_example_args.rag_model_name,index_name="custom",passages_path=passages_path,index_path=index_path,n_docs=8)
tokenizer = RagTokenizer.from_pretrained(rag_example_args.rag_model_name)
model = RagSequenceForGeneration.from_pretrained(rag_example_args.rag_model_name,index_name="custom",indexed_dataset=dataset)
question_hidden_states = model.question_encoder(input_ids)[0]
docs_dict = retriever(input_ids.numpy(), question_hidden_states.detach().numpy(), return_tensors="pt")
| 10-27-2020 21:02:37 | 10-27-2020 21:02:37 | retriever.index.get_doc_dicts(docs_dict["doc_ids"])[0]['text'] gets me the text of the retrieved documents. |
transformers | 8,103 | closed | run_language_modeling crashes with import cannot import name 'DataCollatorForWholeWordMask' from 'transformers' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
## Information
I am trying to run the example for [language modeling](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) but can't get it to start. Import fails
Traceback (most recent call last):
File "run_language_modeling.py", line 32, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForWholeWordMask' from 'transformers' (/home/spacemanidol/miniconda3/envs/prunetransformer/lib/python3.8/site-packages/transformers/__init__.py)
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ X] the official example scripts: (give details below)
running the language modeling example script
The tasks I am working on is:
* [ X] an official GLUE/SQUaD task: (give the name)
Language modeling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a conda environment and install transformers from source.
2. Run language modeling example script
## Expected behavior
Example runs.
| 10-27-2020 20:17:34 | 10-27-2020 20:17:34 | As explained in the README of the examples, you need an [installation from source](https://huggingface.co/transformers/installation.html#installing-from-source) to run the examples, which you don't have, otherwise you would have this object.
Alternatively, you can run the examples associated with your current version by using the files on the [last release tag](https://github.com/huggingface/transformers/releases/tag/v3.4.0). |