| column | dtype | stats |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
3,492
closed
[model_cards]: use MIT license for all dbmdz models
Hi, this PR adds the MIT license tag for all dbmdz models 🤗
03-27-2020 21:01:35
03-27-2020 21:01:35
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=h1) Report > Merging [#3492](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3492/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3492 +/- ## ========================================== - Coverage 77.80% 77.79% -0.01% ========================================== Files 100 100 Lines 17051 17051 ========================================== - Hits 13266 13265 -1 - Misses 3785 3786 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3492/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=footer). Last update [17dceae...70cab16](https://codecov.io/gh/huggingface/transformers/pull/3492?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome! One more data point for #3357 <|||||>Hey @stefan-it, thanks for creating some great models. It looks like a few [dbmdz models](https://huggingface.co/dbmdz) are missing model cards, including the default model for NER (`dbmdz/bert-large-cased-finetuned-conll03-english`). Are these licensed as MIT as well?
transformers
3,491
closed
cased -> uncased for example cmdline consistency
03-27-2020 19:39:26
03-27-2020 19:39:26
In the same file, `--do_lower_case` is also wrongly applied to the `roberta-base` and `xlnet-large-cased` models. It would be great if this PR also included that fix.<|||||>> In the same file, `--do_lower_case` is also wrongly applied to the `roberta-base` and `xlnet-large-cased` models. It would be great if this PR also included that fix. The intention is to have consistency across all the experiments, where all input is lowercased for all examples. Just because roberta and xlnet don't have lowercased models doesn't mean their input has to be the cased version.<|||||>Thanks for raising this. This should be fixed by #3738: if one saves their tokenizer using `save_pretrained()`, there shouldn't be a need to pass `do_lower_case` manually anymore.
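The resolution above (save the tokenizer with `save_pretrained()` so `--do_lower_case` no longer needs to be passed by hand) can be sketched roughly as follows; the checkpoint name and output directory are illustrative, not taken from the thread:

```python
from transformers import BertTokenizer

# An uncased checkpoint carries its lowercasing behaviour in its configuration.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Save the tokenizer next to a fine-tuned model (directory name is illustrative).
tokenizer.save_pretrained("./finetuned-model")

# Reloading restores the same settings, so no manual --do_lower_case flag is needed.
reloaded = BertTokenizer.from_pretrained("./finetuned-model")
print(reloaded.tokenize("Hello World"))  # expected: ['hello', 'world']
```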
transformers
3,490
closed
which iterator to use for different hugging face transformer models for solving multiple choice questions?
Hello, I have used Hugging Face GPT-2 models to do Natural Language Processing. When I used the GPT-2 model, I used the ```BPTTIterator``` from TorchText to pre-process my text data, since GPT-2 essentially performs regular language modelling (next token prediction). I am wondering, when I use GPT-2, BERT and XLNet for **multiple-choice solving**: 1. What type of iterator should I use for multiple-choice question solving? If there is no specific iterator that can accommodate multiple-choice question solving, how should I pre-process my text (multiple-choice questions) before feeding them into the different transformer models? 2. if hugging face transformer models (BERT, XLNet, GPT-2) use special tokens to separate questions from multiple-choice options, what are those special tokens? I want to make the use of the pre-trained models rather than training a new model with new special tokens on my own. 3. Is the format of multiple-choice questions that can be processed with, for example, BERT different than the multiple-choice question format that can be processed by GPT-2? or can each of the transformer models process any type of multiple-choice questions? (for example, BERT could only be used to solve fill-in-the-blank multiple-choice questions, whereas multiple-choice question for GPT-2 does not have to have the fill-in-the-blank format?) Thank you,
03-27-2020 19:13:48
03-27-2020 19:13:48
Do I need to build my own iterator for this? How should the tokens be arranged for multiple choice question solving? e.g. (question) (token1) (token2) (MCoption)(answer1) . ----> is this the right format for all of the GPT-2, BERT and XLNet? Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
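The question above is closed by the stale bot without a concrete answer. One possible approach (an assumption, not an answer from the thread) is to encode each (question, option) pair separately and stack the encodings along a `num_choices` dimension for a multiple-choice head such as `BertForMultipleChoice`; BERT's own `[CLS]`/`[SEP]` tokens handle the separation. A rough sketch with made-up data:

```python
import torch
from transformers import BertForMultipleChoice, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

question = "The capital of France is"    # illustrative question
options = ["Paris", "Berlin", "Madrid"]  # illustrative answer options

# Encode each (question, option) pair to the same length.
encoded = [
    tokenizer.encode_plus(question, opt, max_length=32, pad_to_max_length=True)
    for opt in options
]
input_ids = torch.tensor([e["input_ids"] for e in encoded]).unsqueeze(0)        # (1, num_choices, seq_len)
attention_mask = torch.tensor([e["attention_mask"] for e in encoded]).unsqueeze(0)

with torch.no_grad():
    logits = model(input_ids=input_ids, attention_mask=attention_mask)[0]       # (1, num_choices)
print(logits.argmax(dim=-1))  # highest-scoring option; not meaningful here, since the MC head is untrained
```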
transformers
3,489
closed
missing import in BartForConditionalGeneration example
Add `from transformers import AutoTokenizer, AutoModelWithLMHead` to Bart for Conditional Generation example in # 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
03-27-2020 18:50:47
03-27-2020 18:50:47
Please complete the issue template, I don't understand what the issue is.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
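For reference, a self-contained version of what the report above seems to ask for (the Bart generation example plus the missing `AutoTokenizer`/`AutoModelWithLMHead` imports); the checkpoint name and input text are assumptions, not quoted from the docs:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead  # the imports the example reportedly lacked

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")       # checkpoint name is an assumption
model = AutoModelWithLMHead.from_pretrained("facebook/bart-large-cnn")

article = "New York City is the most populous city in the United States."  # illustrative input
input_ids = tokenizer.encode(article, return_tensors="pt")
summary_ids = model.generate(input_ids, num_beams=4, max_length=40, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```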
transformers
3,488
closed
[bart-tiny-random] Put a 5MB model on S3 to allow faster examples test
- Vocab size is the same to avoid requiring a new tokenizer. - this allows examples that parametrize `model_name`, like `evaluate_cnn.py`, to run much quicker: `summarization/bart/test_bart_examples.py` runs in 6 seconds vs 22 + download time previously. - Would be happy to do this for more models. - Will update `run_sum.py` if this idea is OK with people. - this also makes local debugging easier. You can easily pull down a tiny model without making a config.
03-27-2020 17:21:33
03-27-2020 17:21:33
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=h1) Report > Merging [#3488](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3488/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3488 +/- ## ========================================== - Coverage 77.80% 77.79% -0.01% ========================================== Files 100 100 Lines 17051 17051 ========================================== - Hits 13266 13265 -1 - Misses 3785 3786 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3488/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=footer). Last update [17dceae...3e0a394](https://codecov.io/gh/huggingface/transformers/pull/3488?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>LGTM<|||||>Merging to unblock other testing efforts!
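A rough sketch of how a tiny random checkpoint like the one in this PR could be produced locally; the hyperparameter values below are illustrative, not the ones behind the PR's S3 upload, and the vocabulary size is left at its default so the stock BART tokenizer still applies:

```python
from transformers import BartConfig, BartForConditionalGeneration

# Shrink every architectural dimension except the vocabulary.
tiny_config = BartConfig(
    d_model=24,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=2,
    decoder_attention_heads=2,
    encoder_ffn_dim=32,
    decoder_ffn_dim=32,
)
model = BartForConditionalGeneration(tiny_config)  # randomly initialized, only a few MB on disk
model.save_pretrained("./bart-tiny-random")        # upload this directory wherever tests can fetch it
```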
transformers
3,487
closed
Create model card
03-27-2020 16:33:18
03-27-2020 16:33:18
@julien-c Do you know why it fails?
transformers
3,486
closed
setup.py succeeds, then can't import transformers
```bash pip install -e . ``` works successfully and makes `transformers.egg-info/` Then, ```python import transformers ``` fails with ``` ModuleNotFoundError: No module named 'transformers' ``` env ``` - `transformers` version: 2.6.0 - Platform: Darwin-19.0.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```
03-27-2020 16:33:11
03-27-2020 16:33:11
Do you use the `zsh` shell? Because in that case I think you have to do `pip install -e "."`. I think this has something to do with the shells. Can you compare `zsh` to a pure `bash` shell?<|||||>Fails in bash identically. One clue: this is also happening for `pip install tokenizers` followed by `import tokenizers`, but not for `import numpy`.<|||||>And with `pip install -e "."` it works?<|||||>I fixed it by **restarting** my zsh shell. Now I can't reproduce the bug in a new zsh shell. Closing.
transformers
3,485
closed
Fix circle ci flaky fail of wmt example
Weird bug which might be fixed by forcing the BLEU scorer. Tests pass locally but seem to fail on CircleCI.
03-27-2020 15:25:43
03-27-2020 15:25:43
This is from this, right? https://circleci.com/gh/huggingface/transformers/26455?utm_campaign=workflow-failed&utm_medium=email&utm_source=notification<|||||>Yeah, exactly - it should be fixed now. I will push a slightly cleaner version in a second :-) <|||||>In general, every example test that creates folders or files should delete them afterwards to avoid file-naming collisions. Will open a PR about this. @julien-c @sshleifer @LysandreJik
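A minimal illustration of the clean-up convention proposed in the last comment: write example-test artifacts into a temporary directory so nothing collides across runs (file names here are made up):

```python
import tempfile
from pathlib import Path


def test_example_writes_output():
    # Everything the example writes goes into a throwaway directory...
    with tempfile.TemporaryDirectory() as tmp_dir:
        output_file = Path(tmp_dir) / "predictions.txt"  # illustrative output name
        output_file.write_text("dummy prediction\n")
        assert output_file.read_text().startswith("dummy")
    # ...which is deleted here, so reruns and parallel workers cannot collide on file names.
```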
transformers
3,484
closed
Sphinx build for documentation fails when tensorflow is installed
## Information I am trying to build the documentation in `docs/` using `sphinx-build` so that I can add it to [Dash](https://kapeli.com/dash) as one of the docsets. However, I get an assertion error with sphinx-build when I run `make html` in the `docs/` folder *only when tensorflow is installed*. The build works without tensorflow installed, but the tensorflow methods and classes are emtpy in the generated documentation - only the pytorch ones have documentation in the resulting HTML. ## To reproduce Steps to reproduce the behavior: 1. Create a conda environment, and install pytorch, tensorflow and transformers using the official methods. 2. Run `pip install -e ".[docs]` in source directory, before running `make html` in `docs` folder. You will get the following error (full stacktrace given later): ``` Exception occurred: File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform assert len(field) == 2 AssertionError ``` Full stacktrace: ``` # Sphinx version: 2.4.4 # Python version: 3.6.10 (CPython) # Docutils version: 0.16 release # Jinja2 version: 2.11.1 # Last messages: # reading sources... [ 13%] glossary # # reading sources... [ 16%] index # # reading sources... [ 18%] installation # # reading sources... [ 21%] main_classes/configuration # # reading sources... [ 24%] main_classes/model # # Loaded extensions: # sphinx.ext.mathjax (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/mathjax.py # sphinxcontrib.applehelp (1.0.2) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/applehelp/__init__.py # sphinxcontrib.devhelp (1.0.2) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/devhelp/__init__.py # sphinxcontrib.htmlhelp (1.0.3) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/htmlhelp/__init__.py # sphinxcontrib.serializinghtml (1.1.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/serializinghtml/__init__.py # sphinxcontrib.qthelp (1.0.3) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinxcontrib/qthelp/__init__.py # alabaster (0.7.12) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/alabaster/__init__.py # sphinx.ext.autodoc.type_comment (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/type_comment.py # sphinx.ext.autodoc (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/__init__.py # sphinx.ext.coverage (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/coverage.py # sphinx.ext.napoleon (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/napoleon/__init__.py # recommonmark (0.6.0) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/__init__.py # sphinx.ext.viewcode (2.4.4) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/viewcode.py # sphinx_markdown_tables (<module 'sphinx_markdown_tables.__version__' from '/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_markdown_tables/__version__.py'>) from /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_markdown_tables/__init__.py Traceback (most recent call last): File 
"/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/cmd/build.py", line 276, in build_main app.build(args.force_all, filenames) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/application.py", line 349, in build self.builder.build_update() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 299, in build_update len(to_build)) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 311, in build updated_docnames = set(self.read()) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 418, in read self._read_serial(docnames) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 439, in _read_serial self.read_doc(docname) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 479, in read_doc doctree = read_doc(self.app, self.env, self.env.doc2path(docname)) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/io.py", line 316, in read_doc pub.publish() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/core.py", line 218, in publish self.settings) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/io.py", line 130, in read self.parse() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/readers/__init__.py", line 77, in parse self.parser.parse(self.input, document) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/parsers.py", line 93, in parse self.statemachine.run(inputlines, document, inliner=self.inliner) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 171, in run input_source=document['source']) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2769, in underline self.section(title, source, style, lineno - 1, messages) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File 
"/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2769, in underline self.section(title, source, style, lineno - 1, messages) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 327, in section self.new_subsection(title, lineno, messages) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 395, in new_subsection node=section_node, match_titles=True) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2342, in explicit_markup nodelist, blank_finish = self.explicit_construct(match) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct return method(self, expmatch) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive directive_class, match, type_name, option_presets) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive result = directive_instance.run() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/directive.py", line 157, in run result = parse_generated_content(self.state, params.result, documenter) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/ext/autodoc/directive.py", line 104, in parse_generated_content state.nested_parse(content, 0, node) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2342, in explicit_markup nodelist, blank_finish = self.explicit_construct(match) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct return method(self, expmatch) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive directive_class, match, type_name, option_presets) File 
"/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive result = directive_instance.run() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/domains/__init__.py", line 265, in run return super().run() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/directives/__init__.py", line 195, in run self.state.nested_parse(self.content, self.content_offset, contentnode) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 282, in nested_parse node=node, match_titles=match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2344, in explicit_markup self.explicit_list(blank_finish) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2374, in explicit_list match_titles=self.state_machine.match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 319, in nested_list_parse node=node, match_titles=match_titles) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 196, in run results = StateMachineWS.run(self, input_lines, input_offset) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 242, in run context, state, transitions) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/statemachine.py", line 459, in check_line return method(match, context, next_state) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2647, in explicit_markup nodelist, blank_finish = self.explicit_construct(match) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2354, in explicit_construct return method(self, expmatch) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2097, in directive directive_class, match, type_name, option_presets) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/docutils/parsers/rst/states.py", line 2146, in run_directive result = directive_instance.run() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/domains/__init__.py", line 265, in run return super().run() File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/directives/__init__.py", line 198, in run DocFieldTransformer(self).transform_all(contentnode) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 248, in transform_all self.transform(child) File "/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform assert len(field) == 2 AssertionError ``` 3. 
Uninstall tensorflow. Now when running `make html`, it does finish building, albeit with a bunch of warnings of the following form: `AttributeError: module 'transformers' has no attribute 'TFCamembertForMaskedLM'` -- for every `TFmethod`. ## Expected behavior `make html` should build with no errors. ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.6.0 - Platform: macOS 10.15.4 - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (No) - Tensorflow version (GPU?): 2.1.0 (No) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No - sphinx version: 2.4.4
03-27-2020 15:18:53
03-27-2020 15:18:53
This seems to be similar to [Issue #3466](https://github.com/huggingface/transformers/issues/3466)<|||||>Hi, this issue was solved with https://github.com/huggingface/transformers/commit/e2c05f06ef58ea77103d2c64492dd8d9a0b21c3f Could you try to pull the repository once again and try then?<|||||>Yes, that seems to have fixed it! I still get a bunch of warnings which are also related to indentation I believe? I've posted the full log of my build below with all the other warnings, but I'll close this issue for now: ``` Running Sphinx v2.4.4 making output directory... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 38 source files that are out of date updating environment: [new config] 38 added, 0 changed, 0 removed /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn("Container node skipped: type={0}".format(mdnode.t)) /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn("Container node skipped: type={0}".format(mdnode.t)) /Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/recommonmark/parser.py:75: UserWarning: Container node skipped: type=document warn("Container node skipped: type={0}".format(mdnode.t)) reading sources... [100%] usage /Users/venkat/Downloads/transformers/src/transformers/modeling_utils.py:docstring of transformers.PreTrainedModel.from_pretrained:23: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_utils.py:docstring of transformers.TFPreTrainedModel.from_pretrained:20: WARNING: Definition list ends without a blank line; unexpected unindent. WARNING: error while formatting arguments for transformers.pipeline: 'function' object has no attribute '__mro__' /Users/venkat/Downloads/transformers/src/transformers/pipelines.py:docstring of transformers.Pipeline:6: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_dev_examples:1: WARNING: Inline interpreted text or phrase reference start-string without end-string. /Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_example_from_tensor_dict:3: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/data/processors/utils.py:docstring of transformers.data.processors.utils.DataProcessor.get_train_examples:1: WARNING: Inline interpreted text or phrase reference start-string without end-string. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.batch_encode_plus:32: WARNING: Bullet list ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.build_inputs_with_special_tokens:4: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.encode:37: WARNING: Bullet list ends without a blank line; unexpected unindent. 
/Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.encode_plus:36: WARNING: Bullet list ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.prepare_for_model:17: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.prepare_for_model:18: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:9: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:10: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.tokenize:10: WARNING: Inline strong start-string without end-string. /Users/venkat/Downloads/transformers/src/transformers/tokenization_utils.py:docstring of transformers.PreTrainedTokenizer.truncate_sequences:3: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_albert.py:docstring of transformers.TFAlbertModel.call:47: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_albert.py:docstring of transformers.TFAlbertModel.call:48: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/configuration_auto.py:docstring of transformers.AutoConfig.from_pretrained:7: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/tokenization_auto.py:docstring of transformers.AutoTokenizer:12: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/tokenization_auto.py:docstring of transformers.AutoTokenizer.from_pretrained:6: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModel.from_pretrained:10: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:29: WARNING: Title underline too short. ``AutoModelForPreTraining`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:29: WARNING: Title underline too short. ``AutoModelForPreTraining`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForPreTraining.from_pretrained:9: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:35: WARNING: Title underline too short. ``AutoModelWithLMHead`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:35: WARNING: Title underline too short. ``AutoModelWithLMHead`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelWithLMHead.from_pretrained:10: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:41: WARNING: Title underline too short. 
``AutoModelForSequenceClassification`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:41: WARNING: Title underline too short. ``AutoModelForSequenceClassification`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForSequenceClassification.from_pretrained:10: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:47: WARNING: Title underline too short. ``AutoModelForQuestionAnswering`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:47: WARNING: Title underline too short. ``AutoModelForQuestionAnswering`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForQuestionAnswering.from_pretrained:10: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:53: WARNING: Title underline too short. ``AutoModelForTokenClassification`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/auto.rst:53: WARNING: Title underline too short. ``AutoModelForTokenClassification`` ~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_auto.py:docstring of transformers.AutoModelForTokenClassification.from_pretrained:10: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:7: WARNING: Title underline too short. Overview ~~~~~ /Users/venkat/Downloads/transformers/src/transformers/tokenization_t5.py:docstring of transformers.T5Tokenizer.build_inputs_with_special_tokens:4: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model:9: WARNING: Duplicate explicit target name: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model.forward:54: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model.forward:55: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration:9: WARNING: Duplicate explicit target name: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration.forward:58: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration.forward:59: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:9: WARNING: Duplicate explicit target name: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:25: WARNING: Inline interpreted text or phrase reference start-string without end-string. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model.call:56: WARNING: Unexpected indentation. 
/Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model.call:57: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:60: WARNING: Title underline too short. TFT5ForConditionalGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/docs/source/model_doc/t5.rst:60: WARNING: Title underline too short. TFT5ForConditionalGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~ /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:9: WARNING: Duplicate explicit target name: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:25: WARNING: Inline interpreted text or phrase reference start-string without end-string. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration.call:60: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration.call:61: WARNING: Block quote ends without a blank line; unexpected unindent. /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5Model:1: WARNING: Duplicate target name, cannot be used as a unique reference: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_t5.py:docstring of transformers.T5ForConditionalGeneration:1: WARNING: Duplicate target name, cannot be used as a unique reference: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5Model:1: WARNING: Duplicate target name, cannot be used as a unique reference: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/modeling_tf_t5.py:docstring of transformers.TFT5ForConditionalGeneration:1: WARNING: Duplicate target name, cannot be used as a unique reference: "exploring the limits of transfer learning with a unified text-to-text transformer". /Users/venkat/Downloads/transformers/src/transformers/configuration_xlnet.py:docstring of transformers.XLNetConfig:52: WARNING: Unexpected indentation. /Users/venkat/Downloads/transformers/src/transformers/modeling_xlnet.py:docstring of transformers.XLNetForMultipleChoice.forward:65: WARNING: Inline literal start-string without end-string. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:25: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:29: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:36: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:40: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:44: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:48: WARNING: Line block ends without a blank line. 
/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:52: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:56: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:60: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:64: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:68: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:73: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:78: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:82: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:86: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:90: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:94: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:152: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:156: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:160: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:164: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:168: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:172: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:176: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:180: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:184: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:188: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:192: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:196: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:200: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:207: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:211: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:215: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:219: WARNING: Line block ends without a blank line. 
/Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:223: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:227: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:231: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:235: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:239: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:264: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:268: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:272: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:276: WARNING: Line block ends without a blank line. /Users/venkat/Downloads/transformers/docs/source/pretrained_models.rst:280: WARNING: Line block ends without a blank line. looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] usage /Users/venkat/Downloads/transformers/docs/source/examples.md:440: WARNING: None:any reference target not found: generating indices... genindexdone highlighting module code... [100%] transformers.tokenization_xlnet writing additional pages... search/Users/venkat/opt/miniconda3/envs/tf-pt/lib/python3.6/site-packages/sphinx_rtd_theme/search.html:21: RemovedInSphinx30Warning: To modify script_files in the theme is deprecated. Please insert a <script> tag directly in your theme instead. {% endblock %} done copying images... [100%] imgs/warmup_linear_schedule.png copying static files... ... done copying extra files... done dumping search index in English (code: en)... done dumping object inventory... done build succeeded, 107 warnings. ```<|||||>Yes there are a few warnings, but they're inoffensive. We're working towards removing most of them :)
transformers
3,483
closed
Tests for more examples
# Add tests for more of the examples/ We need to have more testing code for examples. This is particularly true with NER which recently had a tokenizer issue. (Self-assigning)
03-27-2020 15:03:41
03-27-2020 15:03:41
I agree. Tried for an hour last night to add coverage for `run_bart_sum.py` and got stuck on two things. 1) A `circleci` job that installs `pytorch_lightning` (and potentially other dependencies) 2) the ability to import `examples/transformer_base.py` ### Suggested Approach: - add the aforementioned circleci job - a flag like `@require_lightning` to decorate some tests - some code changes to get the imports working sanely. - Checklist/Instructions for how new examples/ contributors can add test coverage. Happy to help with this! CC: @LysandreJik, @julien-c, @thomwolf, @patrickvonplaten <|||||>Cool. Most of these seem easy. Sharing code between examples is a bit harder. I am not sure if we should have an examples package, or symlink that shared file somehow. Currently we add it to the path before running the code. <|||||>With the Pytorch-Lightning examples, I've been running into an [issue](https://github.com/huggingface/transformers/pull/3437) loading trained models with `--do_predict` (when not also using `--do_train`), so it would be helpful to add some model-loading test as well :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
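A sketch of what the `@require_lightning` flag proposed in the thread might look like (this is an assumption about its shape, not the decorator that was eventually merged):

```python
import unittest


def require_lightning(test_case):
    """Skip the decorated test unless pytorch_lightning is importable."""
    try:
        import pytorch_lightning  # noqa: F401
        available = True
    except ImportError:
        available = False
    return unittest.skipUnless(available, "test requires pytorch_lightning")(test_case)


class ExamplesTests(unittest.TestCase):
    @require_lightning
    def test_bart_sum_smoke(self):
        # Placeholder body; a real test would run the example's main() with tiny arguments.
        self.assertTrue(True)
```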
transformers
3,482
closed
Correct output shape for Bert NSP models in docs
In the docs, for Bert models that have the NSP head, the output shape for one of the params returned by the forward method, `seq_relationship_scores`, is incorrect. Fixing it.
03-27-2020 14:53:24
03-27-2020 14:53:24
transformers
3,481
closed
Do I need to pad variable-length examples, or does run_language_modeling.py already take care of that?
I use a custom TextDataset in [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). I can't figure out whether I need to pad the examples to `bucket_size` myself, or whether that is already taken care of in [L221-L223](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L221-L223). Thanks!
03-27-2020 14:51:17
03-27-2020 14:51:17
I believe that the `collate` function should take care of it. However, you will need to create the `attention_mask` variable when you have inputs of variable length, so that the model does not attend to the padded indices.<|||||>@Genius1237 I'm a real noob (I come from a programming background and no ML), but I'm using the GPT2 model, not a masked-language modeling like BERT. Do I still need to do such a thing? I didn't see any `attention_mask` reference in TextDataset loaders in the example file. Also, how does the `collate` function knows to pad the examples to `bucket_size` if no such variable is passed to it?<|||||>The `TextDataset` class takes text and converts it into blocks of size `block_size` (512), concatenating consecutive blocks if needed. My guess is that `LineByLineTextDataset` exists to cater to those who would like to have examples being limited to one sentence each, and thus the max size in one batch would be determined by the longest sentence in that batch. `attention_mask` is definitely needed when you have sequences of different length. Have a look at Have a look at https://github.com/huggingface/transformers/issues/1408 . Something like this should do on top of the existing version of `LineByLineTextDataset`. ``` def collate(examples: List[torch.Tensor]): padding_value = 0 if tokenizer._pad_token is None else tokenizer.pad_token_id input_ids = pad_sequence(examples, batch_first=True, padding_value=tokenizer.pad_token_id) max_length = input_ids.shape[1] attention_mask = torch.stack([torch.cat([torch.ones(len(t), dtype=torch.long), torch.zeros(max_length - len(t), dtype=torch.long)]) for t in examples]) return input_ids, attention_mask ```<|||||>Gotcha, thank you so much @Genius1237 ! I hope it would work. Do I need to unpack `attention_mask` somehow or the collate function in the DataLoader will take care of that?<|||||>And in your code you probably meant to wrote this instead, right? ```python input_ids = pad_sequence(examples, batch_first=True, padding_value=padding_value) ``` Also, the same code should be pasted in ```def evaluate():``` right?<|||||>Same code in evaluate. The collate function is called by the dataloader. The dataloader calls the __getitem__ on the dataset `batch_size` times and sends that output to the collate function. The output of the collate function is what you will get when you do `for batch in train_dataloader`. The batch in this case will be a 2 tuple, with `batch[0]` having the input_ids and `batch[1]` having the attention_mask.<|||||>I really appreciate your help, thanks a lot. I'll ping the maintainers to change the example code so others can benefit. @thomwolf @LysandreJik @patrickvonplaten Thanks!<|||||>Hey, @Genius1237, I'm getting this error using your code: ``` Traceback (most recent call last): File "run_language_modeling.py", line 974, in <module> main() File "run_language_modeling.py", line 924, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 508, in train inputs = inputs.to(args.device) AttributeError: 'tuple' object has no attribute 'to' ``` Using the previous code I'm not getting any error.<|||||>When you do `for batch in train_dataloader`, batch is basically whatever is returned by the `collate` function, which is this cause becomes a 2-tuple. In your case `input_ids` is a 2-tuple containing 2 tensors. 
You'll have to split that into 2, i.e `input_ids, attention_mask = inputs`, and move forward with that, pushing both those tensors to the required device (`input_ids = input_ids.to(args.device); attention_mask = attention_mask.to(args.device)`).<|||||>Can you please help me? I'm really clueless... I tried: ```python for _ in train_iterator: epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0]) for step, batch in enumerate(epoch_iterator): # Skip past any already trained steps if resuming training if steps_trained_in_current_epoch > 0: steps_trained_in_current_epoch -= 1 continue inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch) input_ids, attention_mask = inputs inputs = input_ids.to(args.device) labels = labels.to(args.device) attention_mask = attention_mask.to(args.device) model.train() outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) loss = outputs[0] # model outputs are always tuple in transformers (see doc) ``` But I get this error: ``` Traceback (most recent call last): File "run_language_modeling.py", line 976, in <module> main() File "run_language_modeling.py", line 926, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 510, in train labels = labels.to(args.device) AttributeError: 'tuple' object has no attribute 'to' ``` Also, the rest of the code doesn't seem to use `attention_mask` variable, wouldn't it be redundant?<|||||>``` for _ in train_iterator: epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0]) for step, batch in enumerate(epoch_iterator): # Skip past any already trained steps if resuming training if steps_trained_in_current_epoch > 0: steps_trained_in_current_epoch -= 1 continue input_ids, attention_mask = batch inputs, labels = mask_tokens(input_ids, tokenizer, args) if args.mlm else (input_ids, input_ids) inputs = inputs.to(args.device) labels = labels.to(args.device) attention_mask = attention_mask.to(args.device) model.train() outputs = model(inputs, masked_lm_labels=labels, attention_mask=attention_mask) if args.mlm else model(inputs, labels=labels, attention_mask=attention_mask) loss = outputs[0] # model outputs are always tuple in transformers (see doc) ```<|||||>Thank you, it works! Btw, I don't see a difference in output between training using the attention mask and the original code. Does it mean something?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||> In training a GPT-2 i have the same question too. I don't see any significant change between training using the attention mask or without. Did u have any answer to this @timsoraro ? Also my perplexity score i can say is too low, any idea for this behaviour? <|||||>@niklaras I didn't see much difference either after many experiments with or without, I got the same quality of generation.
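A condensed, self-contained variant of the collate function discussed above, runnable on its own; it assumes token id 0 is reserved for padding (standing in for `tokenizer.pad_token_id` from the thread), and the "tokenized sentences" are made-up tensors:

```python
from typing import List

import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

PAD_ID = 0  # stand-in for tokenizer.pad_token_id


def collate(examples: List[torch.Tensor]):
    # Pad every example in the batch to the length of the longest one...
    input_ids = pad_sequence(examples, batch_first=True, padding_value=PAD_ID)
    # ...and mark real tokens with 1 and padding with 0, so the model can ignore the padding.
    attention_mask = (input_ids != PAD_ID).long()
    return input_ids, attention_mask


# Illustrative variable-length "tokenized sentences".
dataset = [torch.tensor([5, 6, 7]), torch.tensor([8, 9]), torch.tensor([10, 11, 12, 13])]
loader = DataLoader(dataset, batch_size=2, collate_fn=collate)

for input_ids, attention_mask in loader:
    print(input_ids.shape, attention_mask.tolist())
```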
transformers
3,480
closed
Add option to choose T5 model size.
I believe the error mentioned in #3469 is due to the example tests loading T5-large into memory. One of the workers loads that model, which fills up the machine's memory, and the other workers crash with an OOM error. This PR gives the option to choose the T5 model size and changes the tests to use only the small model.
03-27-2020 14:33:15
03-27-2020 14:33:15
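The change described above presumably amounts to exposing the checkpoint as an argument so tests can pass the small model; a hedged sketch of that pattern (the argument name and default are assumptions):

```python
import argparse

from transformers import T5ForConditionalGeneration, T5Tokenizer

parser = argparse.ArgumentParser()
parser.add_argument("--model_name", default="t5-small", help="e.g. t5-small, t5-base, t5-large")
args = parser.parse_args()

# Tests pass --model_name t5-small so no worker has to hold t5-large in memory.
tokenizer = T5Tokenizer.from_pretrained(args.model_name)
model = T5ForConditionalGeneration.from_pretrained(args.model_name)
print(model.config.d_model)
```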
transformers
3,479
closed
Add Colab with the evaluation procedure
03-27-2020 13:57:31
03-27-2020 13:57:31
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,478
closed
add summarization and translation to notebook
Add summarization and translation to pipeline notebook
03-27-2020 12:06:47
03-27-2020 12:06:47
transformers
3,477
closed
Added CovidBERT-NLI model card
03-27-2020 11:44:09
03-27-2020 11:44:09
transformers
3,476
closed
Finetuning FlauBERT with hugging face's Transformers : WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
I am trying to use the run_glue script from the transformers library to fine-tune FlauBERT on French data using Hugging Face's transformers, and I am getting this error: ` 03/27/2020 11:54:48 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True Traceback (most recent call last): File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in <module> main() File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type] KeyError: 'flaubert' ` Could you help me figure out how to solve it?
03-27-2020 11:01:28
03-27-2020 11:01:28
You're probably on an old transformers version.
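A quick way to check whether the installed version predates FlauBERT support (the imported classes exist only in releases that ship FlauBERT):

```python
import transformers

print(transformers.__version__)

try:
    # These imports succeed only on versions that include FlauBERT; otherwise, upgrade transformers.
    from transformers import FlaubertForSequenceClassification, FlaubertTokenizer  # noqa: F401
    print("FlauBERT is available; --model_type flaubert should be recognized.")
except ImportError:
    print("FlauBERT not found; try `pip install --upgrade transformers` or install from source.")
```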
transformers
3,475
closed
[WIP] General docs polish
Rebased from #3461, so merge after that one. - [x] Add Bart, T5 and MMBT to main docs page - [x] Add MMBT docs - [ ] Add MMBT pretrained info - [ ] Polish MMBT docstring
03-27-2020 10:48:06
03-27-2020 10:48:06
transformers
3,474
closed
[examples] fine-tuning `bert-base-finnish-(un)cased-v1` model for Named Entity Recognition
No new features were added and no modifications were made to the existing code; this just adds scripts for the fine-tuning.
03-27-2020 09:21:15
03-27-2020 09:21:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=h1) Report > Merging [#3474](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3474/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3474 +/- ## ======================================= Coverage 77.56% 77.56% ======================================= Files 100 100 Lines 16970 16970 ======================================= + Hits 13162 13163 +1 + Misses 3808 3807 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3474/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=footer). Last update [e392ba6...d9e6d4e](https://codecov.io/gh/huggingface/transformers/pull/3474?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,473
closed
Inversion of a mask in newer pytorch versions
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): XLNet Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Finetuning on downstream task. ## To reproduce Steps to reproduce the behavior: 1.Just run the XLNet Model with a newer pytorch version. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ``` RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead. ``` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.6.0 - Platform: Ubuntu - Python version: 3.6 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: parallel ## Additional Comments This change was made in pytorch 1.2.0, check the release notes [here](https://github.com/pytorch/pytorch/releases/tag/v1.2.0).
03-27-2020 06:10:20
03-27-2020 06:10:20
Do you mind giving a reproducible example so that we may debug easily?<|||||>I was following this issue because I ran into the same problem. I won't use the code I ran into the problem with here but here is the gist: ```python from transformers import XLNetForSequenceClassification model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased") inputs = torch.randint(0, 100, (32, 100)) # 100 words in vocab, batch size 32, seq_len = 100 masks = torch.ones(inputs.size(), dtype=torch.bool) # none of the tokens are padding labels = torch.randint(0, 2, (32,)) # binary classification result = model(inputs, attention_mask=masks, labels=labels) ``` Error: > Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_xlnet.py", line 1150, in forward inputs_embeds=inputs_embeds, File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/modeling_xlnet.py", line 778, in forward input_mask = 1.0 - attention_mask File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/tensor.py", line 394, in __rsub__ return _C._VariableFunctions.rsub(self, other) RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead. I use OSX Mojave, torch==1.4.0, if it helps. Edit: by converting the masks to dtype `torch.uint32`, I was able to get it to work, but I'm not sure if masking using an integer mask is the correct way of handling this.<|||||>The `torch.bool` was introduced in `torch==1.2.0` but we're looking to accommodate `torch>=1.0.0`, so we have to handle such cases with a `uint` syntax. All model inputs should be `uint`, and not `bool` for that specific reason.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
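Until older torch versions are dropped, a simple workaround on the user side is to build the attention mask with a float (or `uint8`) dtype instead of `torch.bool` — a minimal sketch with toy random inputs, not the reporter's actual data:

```python
import torch
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased")

inputs = torch.randint(0, 100, (4, 32))   # toy batch: 4 sequences of 32 token ids
labels = torch.randint(0, 2, (4,))        # binary labels

# Build the mask with a float dtype (1.0 = real token, 0.0 = padding) so that
# `1.0 - attention_mask` inside the model works on both old and new torch.
masks = torch.ones(inputs.size(), dtype=torch.float)

loss, logits = model(inputs, attention_mask=masks, labels=labels)[:2]
print(loss.item())
```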
transformers
3,472
closed
Optimize tokenization (85-92% time reduction)
This optimization in tokenization reduces the tokenization time by ~ 85-92 % Stats: Before fix: 8%|████████▉ | 119574/1486030 [01:03<12:09, 1874.22it/s] After fix: 98%|████████████████████████████████████ | 1459590/1486030 [01:59<00:02, 12244.82it/s] Another stats: When huge number of new tokens (~58K) are added, tokenization time bumps up to ~27-28 hours With this fix the time drops to ~2.15 hours !
03-27-2020 06:02:44
03-27-2020 06:02:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,471
closed
TFAlbertForMaskedLM Decoding Error
# 🐛 Bug ## Information ### Model TFAlbertForMaskedLM <https://huggingface.co/transformers/model_doc/albert.html#tfalbertformaskedlm> ### Language English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ```python import tensorflow as tf from transformers import AlbertTokenizer, TFAlbertForMaskedLM tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = TFAlbertForMaskedLM.from_pretrained('albert-base-v2') input_ids = tf.constant(tokenizer.encode("This is a test!"))[ None, :] # Batch size 1 outputs = model(input_ids) prediction_scores = outputs[0] outputTokens = tf.math.argmax(prediction_scores, axis=2) outputTokens = tf.keras.backend.eval(outputTokens[0]) outputTokens = tokenizer.decode(outputTokens) print(outputTokens) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Copy the above code 2. See output in terminal ``` time this is a test! your ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Output `This is a test!` ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: How to install transformers-cli? - Platform: MacOS 10.15.2 - Python version: 3.6.6 - PyTorch version (GPU?): 1.4.0 CPU - Tensorflow version (GPU?): 2.1.0 CPU - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
03-27-2020 05:25:06
03-27-2020 05:25:06
When you encode with `tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')`, a [CLS] token is added at the beginning and a [SEP] token is added at the end. You can verify this by:
```python
from transformers import AlbertTokenizer, TFAlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokenizer.decode(tokenizer.encode("This is a test!"))  # gives '[CLS] this is a test![SEP]'
```
Now the encoded string has two added tokens, one at the beginning and one at the end. This means the two added tokens also produce two logits, whose argmax tokens in your case happened to be `time` and `your`. If you don't want to add [CLS] and [SEP] when encoding, use:
```python
from transformers import AlbertTokenizer, TFAlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokenizer.decode(tokenizer.encode("This is a test!", add_special_tokens=False))  # gives 'this is a test!'
```
<|||||>So they can't output special tokens properly.<|||||>I mean they can, but the model does not have to output "[CLS]" when you feed "[CLS]" into the model.
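If one prefers to keep the default encoding (with `[CLS]`/`[SEP]`) and still decode only the positions of the original words, a small variant is to drop the first and last logits before taking the argmax — a sketch; it does not guarantee the model reproduces the input verbatim:

```python
import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = TFAlbertForMaskedLM.from_pretrained("albert-base-v2")

input_ids = tf.constant(tokenizer.encode("This is a test!"))[None, :]  # includes [CLS] ... [SEP]
prediction_scores = model(input_ids)[0]

# Ignore the logits at the [CLS] (first) and [SEP] (last) positions before decoding.
predicted_ids = tf.math.argmax(prediction_scores, axis=-1).numpy()[0][1:-1]
print(tokenizer.decode(predicted_ids.tolist()))
```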
transformers
3,470
closed
Update README.md
Fix typo
03-27-2020 02:58:17
03-27-2020 02:58:17
transformers
3,469
closed
CircleCI ExamplesTests::test_run_squad failing
Started happening at https://github.com/huggingface/transformers/pull/3428 and has been happening consistently. Scroll all the way down for the [traceback](https://circleci.com/gh/huggingface/transformers/26044?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link).
03-27-2020 01:47:52
03-27-2020 01:47:52
![image](https://user-images.githubusercontent.com/6045025/77713019-b7163c80-6fab-11ea-9b78-0ed668943c18.png) Happy to help on this @LysandreJik @patrickvonplaten<|||||>The test runs fine locally on my computer. A couple of wild thoughts: - It happened after merging #3411 and #3428 which adds quite some long `t5-large` and `t5-base` tests to the test examples. Could this test fail because of some kind of time-out error? - The test consumes 10GB RAM when running locally - this doesn't seem too much to fail though.<|||||>Did you try connecting to a failing circle-ci box, to investigate? <|||||>I think this is solved with https://github.com/huggingface/transformers/pull/3485 .
transformers
3,468
closed
Issue in generating samples for text generation
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...):GPT2 ```python def generate_samples(args, model, prompt_text): """Generating sampling for the provided prompt using the provided model.""" set_seed(args.seed) _, _, tokenizer_class = run_language_modeling.MODEL_CLASSES[args.model_type] tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path, cache_dir=None) requires_preprocessing = args.model_type in run_generation.PREPROCESSING_FUNCTIONS.keys() encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(args.device) ``` Error: run_language_modeling has no 'MODEL_CLASSES' Language I am using the model on (English):
03-27-2020 01:12:00
03-27-2020 01:12:00
The file `run_language_modeling.py` does indeed not have a variable called `MODEL_CLASSES`. Can you explain what you are trying to do exactly?
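Since the example script no longer exposes a `MODEL_CLASSES` dict, one way to get the same effect is to load the tokenizer and model through the Auto classes directly. A rough sketch — the function signature and defaults are assumptions, not the script's API:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead

def generate_samples(model_name_or_path, prompt_text, device="cpu", max_length=50):
    """Load a causal LM (e.g. GPT-2) and sample a continuation for the prompt."""
    tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
    model = AutoModelWithLMHead.from_pretrained(model_name_or_path).to(device)

    encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
    encoded_prompt = encoded_prompt.to(device)

    output_sequences = model.generate(input_ids=encoded_prompt, max_length=max_length, do_sample=True)
    return tokenizer.decode(output_sequences[0].tolist(), skip_special_tokens=True)

print(generate_samples("gpt2", "Transformers are"))
```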
transformers
3,467
closed
Model Cards: Fix grammar error
03-26-2020 23:27:48
03-26-2020 23:27:48
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=h1) Report > Merging [#3467](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63f4d8cad010f1972254007ad56b22fe5ed203fe&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3467/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3467 +/- ## ======================================= Coverage 77.84% 77.84% ======================================= Files 100 100 Lines 17060 17060 ======================================= + Hits 13280 13281 +1 + Misses 3780 3779 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3467/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=footer). Last update [63f4d8c...1681ac8](https://codecov.io/gh/huggingface/transformers/pull/3467?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,466
closed
Docstring cannot be built anymore
# 🐛 Bug ## Information Docstring The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `pip install -e ".[docs]"` at `transformers` root folder 2. `cd docs` 3. `make html` ## Expected behavior It should work, but an error message is displayed: > /home/patrick/hugging_face/transformers/src/transformers/modeling_utils.py:docstring of transformers.PreTrainedModel.from_pretrained:23: WARNING: Unexpected indentation. > /home/patrick/hugging_face/transformers/src/transformers/modeling_tf_utils.py:docstring of transformers.TFPreTrainedModel.from_pretrained:20: WARNING: Definition list ends without a blank line; unexpected unindent. > > Exception occurred: > File "/home/patrick/hugging_face/transformers_venv/lib/python3.6/site-packages/sphinx/util/docfields.py", line 260, in transform > assert len(field) == 2 > AssertionError > The full traceback has been saved in /tmp/sphinx-err-mblqzztk.log, if you want to report the issue to the developers. > Please also report this if it was a user error, so that a better error message can be provided next time. > A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks! > Makefile:19: recipe for target 'html' failed ## Environment info - `transformers` version: 2.6.0 - Platform: Linux-5.3.0-42-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0+cpu (False) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?:No - Using distributed or parallel set-up in script?: No
03-26-2020 23:05:42
03-26-2020 23:05:42
Was fixed with https://github.com/huggingface/transformers/commit/e2c05f06ef58ea77103d2c64492dd8d9a0b21c3f
transformers
3,465
closed
Add link to 16 POS tags model
03-26-2020 22:58:49
03-26-2020 22:58:49
transformers
3,464
closed
Add text shown in example of usage
03-26-2020 22:49:06
03-26-2020 22:49:06
transformers
3,463
closed
Question Answering pipeline not working
This is error while running the question answering pipeline ! convert squad examples to features: 0%| | 0/1 [00:00<?, ?it/s] --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py", line 198, in squad_convert_example_to_features p_mask = np.array(span["token_type_ids"]) KeyError: 'token_type_ids' """ The above exception was the direct cause of the following exception: KeyError Traceback (most recent call last) <ipython-input-3-3c4dd3618524> in <module>() 2 3 nlp_qa = pipeline('question-answering') ----> 4 nlp_qa(context='Hugging Face is a French company based in New-York.', question='Where is based Hugging Face ?') 8 frames /usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py in squad_convert_example_to_features() 196 # p_mask: mask with 1 for token than cannot be in the answer (0 for token which can be in an answer) 197 # Original TF implem also keep the classification token (set to 0) (not sure why...) --> 198 p_mask = np.array(span["token_type_ids"]) 199 200 p_mask = np.minimum(p_mask, 1) KeyError: 'token_type_ids'
03-26-2020 21:29:10
03-26-2020 21:29:10
Had the same Issue, I think it was already fixed in an older commit, but the pip package doesnt seem to be updated. Try installing transformers from this repo: `git clone https://github.com/huggingface/transformers` `cd transformers` `pip install .`<|||||>I also had the same issue as @paras55, and the solution provided by @mowoe fixed it! Thanks!<|||||>this is still an issue in 2.7.0.<|||||>Are you sure? I think it was fixed in this commit: [c76c3ce](https://github.com/huggingface/transformers/commit/c76c3cebed3c707178d9f721349c5abd5206a57f). <|||||>I cloned the repo, checked out the 2.7.0 tag and built and installed the wheel and was running into the same issue. Result of `pip list` ``` Package Version ---------------------- ------------ absl-py 0.9.0 astor 0.8.1 astroid 2.3.3 asttokens 2.0.3 attrs 19.3.0 autopep8 1.5 boto3 1.12.27 botocore 1.15.27 cachetools 4.0.0 certifi 2019.11.28 chardet 3.0.4 click 7.1.1 dataclasses 0.7 decorator 4.4.2 docutils 0.15.2 entrypoints 0.3 filelock 3.0.12 flake8 3.7.9 flake8-aaa 0.7.1 gast 0.2.2 google-auth 1.11.3 google-auth-oauthlib 0.4.1 google-pasta 0.2.0 grpcio 1.27.2 h5py 2.10.0 idna 2.9 importlab 0.5.1 importlib-metadata 1.5.0 isort 4.3.21 jmespath 0.9.5 joblib 0.14.1 Keras-Applications 1.0.8 Keras-Preprocessing 1.1.0 lazy-object-proxy 1.4.3 Markdown 3.2.1 mccabe 0.6.1 more-itertools 8.2.0 mypy 0.761 mypy-extensions 0.4.3 networkx 2.4 ninja 1.9.0.post1 numpy 1.18.2 oauthlib 3.1.0 opt-einsum 3.2.0 packaging 20.1 pandas 1.0.3 pip 20.0.2 pluggy 0.13.1 protobuf 3.11.3 py 1.8.1 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycodestyle 2.5.0 pyflakes 2.1.1 pylint 2.4.4 pyparsing 2.4.6 pytest 5.3.5 python-dateutil 2.8.1 pytype 2020.2.20 pytz 2019.3 PyYAML 5.3.1 regex 2020.2.20 requests 2.23.0 requests-oauthlib 1.3.0 rsa 4.0 s3transfer 0.3.3 sacremoses 0.0.38 scikit-learn 0.22.2.post1 scipy 1.4.1 sentencepiece 0.1.85 setuptools 45.2.0 six 1.14.0 tensorboard 2.0.2 tensorflow 2.0.0 tensorflow-determinism 0.3.0 tensorflow-estimator 2.0.1 termcolor 1.1.0 tokenizers 0.5.2 tqdm 4.43.0 transformers 2.7.0 typed-ast 1.4.1 typing-extensions 3.7.4.1 urllib3 1.25.8 wcwidth 0.1.8 Werkzeug 1.0.0 wheel 0.34.2 wrapt 1.11.2 zipp 3.0.0 ``` the code I ran (unnecessary parts excluded): ``` for a, b in zip(train_text_a, train_text_b): tokens_dict = tokenizer.encode_plus(a, b, max_length=10, pad_to_max_length=True) train_input_ids.append(np.asarray([tokens_dict["input_ids"]])) train_input_masks.append(np.asarray([tokens_dict["attention_mask"]])) train_input_segment_ids.append(np.asarray([tokens_dict["token_type_ids"]])) ``` and the traceback I received: ``` Traceback (most recent call last): File "<input>", line 104, in <module> KeyError: 'token_type_ids' ```<|||||>Yeah I think we are talking about a completely different problem here. @amoux and @paras55 had this exception in the squad.py of this repo (Which shouldnt raise one, when used this way). Your Exception however is a KeyError in your code (line 104). Maybe try adding `return_token_type_ids=True` as argument to your tokenizer. This is still an issue, as this should be default. Maybe open a new one, as this isnt the same problem.<|||||>You're definitely correct. `return_token_type_ids` solved the problem. 
Much appreciated @mowoe!<|||||> An example for question answering with DistilBERT ``` from transformers import DistilBertTokenizer, DistilBertForQuestionAnswering import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True) model = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad') context = "The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month.The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world." question = "What was President Donald Trump's prediction?" # question = "How many deaths have been reported from the virus?" encoding = tokenizer.encode_plus(question, context) input_ids, attention_mask = encoding["input_ids"], encoding["attention_mask"] start_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask])) ans_tokens = input_ids[torch.argmax(start_scores) : torch.argmax(end_scores)+1] answer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens , skip_special_tokens=True) all_tokens = tokenizer.convert_ids_to_tokens(input_ids) print ("\nAnswer Tokens: ") print (answer_tokens) answer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens) print ("\nFinal Answer : ") print (answer_tokens_to_string) ``` Output is - Answer Tokens: ['some', 'states', 'would', 're', '##open', 'this', 'month'] Final Answer : some states would reopen this month <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,462
closed
SyntaxError when fine-tuning ALBERT on NER
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): ALBERT Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: Running on a GCP VM: 1. ``` python3 ${REPO_DIR}/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path ${BERT_CKPT_DIR} --albert_config_file ${BERT_CKPT_DIR}/config.json --pytorch_dump_path ${BERT_CKPT_DIR}/pytorch_model.bin ``` 2. ``` python3 ${REPO_DIR}/examples/ner/run_ner.py --model_type albert --model_name_or_path ${BERT_CKPT_DIR} --do_train --do_eval --data_dir ${NER_DATA_DIR} --labels ${NER_DATA_DIR}/labels.txt --max_seq_length 128 --num_train_epochs 3 --per_gpu_train_batch_size 32 --output_dir ${NER_MODEL_CKPT_DIR} --seed 1 --do_predict --save_steps 750 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` Traceback (most recent call last): File "transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 23, in <module> from transformers import AlbertConfig, AlbertForMaskedLM, load_tf_weights_in_albert File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 664, in _load_unlocked File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/__init__.py", line 23, in <module> from .benchmark_utils import ( File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 896, in _find_spec File "<frozen importlib._bootstrap_external>", line 1147, in find_spec File "<frozen importlib._bootstrap_external>", line 1123, in _get_spec File "<frozen importlib._bootstrap_external>", line 1104, in _legacy_get_spec File "<frozen importlib._bootstrap>", line 444, in spec_from_loader File "<frozen importlib._bootstrap_external>", line 541, in spec_from_file_location File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/benchmark_utils.py", line 44 filename: str ^ SyntaxError: invalid syntax Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7fc729d8a598> Traceback (most recent call last): File "/usr/lib/python3.5/weakref.py", line 117, in remove TypeError: 'NoneType' object is not callable Traceback (most recent call last): File "transformers/examples/ner/run_ner.py", line 33, in <module> from transformers import ( File "<frozen importlib._bootstrap>", line 969, in _find_and_load File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 664, in _load_unlocked File "<frozen importlib._bootstrap>", line 634, in _load_backward_compatible File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/__init__.py", line 23, in <module> File "<frozen importlib._bootstrap>", line 
969, in _find_and_load File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 896, in _find_spec File "<frozen importlib._bootstrap_external>", line 1147, in find_spec File "<frozen importlib._bootstrap_external>", line 1123, in _get_spec File "<frozen importlib._bootstrap_external>", line 1104, in _legacy_get_spec File "<frozen importlib._bootstrap>", line 444, in spec_from_loader File "<frozen importlib._bootstrap_external>", line 541, in spec_from_file_location File "/home/manuto/.local/lib/python3.5/site-packages/transformers-2.6.0-py3.5.egg/transformers/benchmark_utils.py", line 44 filename: str ^ SyntaxError: invalid syntax Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x7f1bd863ed08> Traceback (most recent call last): File "/usr/lib/python3.5/weakref.py", line 117, in remove TypeError: 'NoneType' object is not callable ``` ## Expected behavior Convert ALBERT TF checkpoint to PyTorch `model.bin` (works in 2.5.1 version of transformers) and fine-tune the model on NER (error in 2.5.1 mentioned [here](https://github.com/huggingface/transformers/issues/3412) which is the reason why I switched to 2.6.0). <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.6.0 (installed with setup.py) - Platform: Linux Ubuntu 18.04 - Python version: 3.5.3 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): 1.14.0 - Using GPU in script?: - Using distributed or parallel set-up in script?:
03-26-2020 18:00:46
03-26-2020 18:00:46
`transformers` 2.6.0 has dropped support for Python 3.5: https://github.com/huggingface/transformers/releases/tag/v2.6.0<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,461
closed
Add T5 to docs
- [x] Copy-paste from Bart docs - [x] Polish main docs page - [x] Improve docstring in `modeling_t5.py` and `modeling_tf_t5.py`
03-26-2020 17:44:25
03-26-2020 17:44:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=h1) Report > Merging [#3461](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3ee431dd4c720e67e35a449b453d3dc2b15ccfff&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3461/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3461 +/- ## ======================================= Coverage 77.79% 77.80% ======================================= Files 100 100 Lines 17049 17051 +2 ======================================= + Hits 13264 13267 +3 + Misses 3785 3784 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.83% <ø> (ø)` | | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.29% <100.00%> (+0.08%)` | :arrow_up: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3461/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=footer). Last update [3ee431d...d57c03b](https://codecov.io/gh/huggingface/transformers/pull/3461?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,460
closed
Error : forward() got an unexpected keyword argument 'inputs_embeds'
Hello, I am trying to train a GPT2 from scratch using ```modeling_gpt2.py``` as a base. I declared the model as follows:
```python
config = GPT2Config(vocab_size = VELSIZE, n_positions = SEQLEN, n_embd = EMBEDSIZE, n_layer = NUMLAYER, n_ctx = SEQLEN, n_head = NUMHEAD)
model = GPT2Model(config)
```
I don't need to use the built-in embeddings for my application and would like to pass my input tensor as-is, but trying ```model(inputs_embeds = test)```, ```model.forward(inputs_embeds = test)```, ```model(input_ids=None, past=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=test)``` or any other variant I can think of always results in the following error:
```python
Traceback (most recent call last):
  File "<ipython-input-52-616a2eb9b3f4>", line 7, in <module>
    inputs_embeds=testx)
  File "C:\Users\cnelias\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'inputs_embeds'
```
Is this a bug or am I doing it wrong?
03-26-2020 17:36:46
03-26-2020 17:36:46
I can run the following code successfully:
```python
from transformers import GPT2Model, GPT2Tokenizer
import torch

model = GPT2Model.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>')
input_ids = tokenizer.encode("Hello, how are you?", return_tensors='pt')
inputs_embeds = model.wte(input_ids)
model(inputs_embeds=inputs_embeds)  # runs without error
```
Can you update `transformers` to the most current version and verify that you can run the code snippet I posted? <|||||>Running ```conda update transformers``` returned that I already have the latest version. As for your snippet, I still get the same error:
```python
Traceback (most recent call last):
  File "<ipython-input-136-d0df910b9d57>", line 10, in <module>
    model(inputs_embeds=inputs_embeds) # runs without error
  File "C:\Users\cnelias\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'inputs_embeds'
```
Is this because I installed ```transformers``` with ```conda``` instead of ```pip```? Edit: this is indeed probably a conda issue. When I run the snippet in ```atom``` (with the python dependency and not anaconda) instead of ```spyder```, then it works.<|||||>I get the same error. Upgrading transformers via pip doesn't solve the problem. Any solution?
transformers
3,459
closed
[Docs] Add better explanation to check `docs` locally.
03-26-2020 17:34:23
03-26-2020 17:34:23
transformers
3,458
closed
WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
I am trying to use the `run_glue` script to fine-tune FlauBERT on French data using huggingface's transformers, and I am getting this error:

```
03/27/2020 11:54:48 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
Traceback (most recent call last):
  File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in <module>
    main()
  File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main
    config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'flaubert'
```

Could you help me find out how to solve it?
03-26-2020 16:52:35
03-26-2020 16:52:35
How do I solve it?
transformers
3,457
closed
ImportError: cannot import name 'TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING'
Wanted to use the NER demo, but after running: `!python3 ./ner/run_tf_ner.py --data_dir /data --model_type bert --labels .data/labels.txt --model_name_or_path bert-base-multilingual-cased --output_dir germeval-model --max_seq_length 128 --num_train_epochs 3 --per_device_train_batch_size 32 --save_steps 750 --seed 1 --do_train --do_eval --do_predict` I encounter the ImportError. Any suggestions?
03-26-2020 15:58:19
03-26-2020 15:58:19
Importing `TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING` from `transformers` works for me. Can you update your transformers library and see whether you still get the import error?<|||||>Hi, I have the same issue here. I have installed transformers from source with the latest update, but I still get this **ImportError: cannot import name 'TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING'**<|||||>Somehow, on a second try `pip install transformers` worked for me. But I can't tell why it didn't work at the beginning. <|||||>Thanks for the quick response. I will try again by installing it not from source. ... Nope, still not working. <|||||>Make sure you really update and install from source. If in doubt, recreate your virtual env.<|||||>`pip uninstall transformers` and `pip install transformers` to make sure your transformers version is up to date.<|||||>Thanks, I got around the problem by converting it to a PyTorch model. But now I am facing other issues! Anyway, thank you again for your help.
transformers
3,456
closed
Fine-tuning with BertForSequenceClassification on custom dataset yields a model that outputs only the label with highest support in training set
Hello! I have a custom dataset that I wish to fine tune BERT on for classification. The examples consist of 3 sequences each and the label set is {0, 1, ..., 9}. The training data has 570 examples, and validation has 150 examples. The encoding of the input examples is as follows: `tmp_enc = sequence_a + ' [SEP] ' + sequence_b + ' [SEP] ' + sequence_c` `enc = tokenizer.encode(tmp_enc, add_special_tokens=True)` where `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')` my training loop is: ``` optimizer = torch.optim.SGD(model.parameters(), lr=0.001) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=5*len(X_train)) train_losses = [] valid_losses = [] for epoch in range(5): #train model.train(); epoch_train_loss = 0 for i in range(len(X_train)): model.zero_grad() loss, logits = model(input_ids=X_train[i], labels=y_train[i].unsqueeze(0)) loss.backward() optimizer.step() scheduler.step() optimizer.zero_grad() nn.utils.clip_grad_norm_(model.parameters(), 1.0) epoch_train_loss += loss.item() if i % 100 == 0: print(f'example {i + 1}, loss: {loss.item()}') avg_loss = epoch_train_loss / len(X_train) train_losses.append(avg_loss) print(f'epoch {epoch + 1}, average train loss: {avg_loss}') #validation model.eval(); epoch_valid_loss = 0 for i in range(len(X_test)): with torch.no_grad(): loss, logits = model(input_ids=X_test[i], labels=y_test[i].unsqueeze(0)) epoch_valid_loss += loss.item() avg_loss = epoch_valid_loss / len(X_test) valid_losses.append(avg_loss) print(f'epoch {epoch + 1} done. average valid loss: {avg_loss}') ``` where `model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=10)` and all layers but the classifier are frozen: ``` for param in model.bert.parameters(): param.requires_grad = False ``` My issue is that loss is not decreasing and the model eventually always outputs a single label which is the one with highest support in the data. (validated by removing the examples with highest occurring label and getting similar results). Also, the logits for all examples, passed to the model after training are quite similar. e.g.: ``` logits: tensor([[ 1.5107, -0.0595, 0.3490, -0.8669, -0.8848, -0.8097, 0.2685, 0.7246, -0.3133, 0.4215]]), true label: 4 logits: tensor([[ 1.4187, -0.3009, 0.3776, -0.5615, -0.7881, -0.5849, 0.3391, 0.5756, -0.3861, 0.3639]]), true label: 6 logits: tensor([[ 1.3919, -0.4227, 0.3455, -0.4626, -0.7795, -0.5608, 0.2996, 0.5791, -0.4275, 0.3700]]), true label: 8 ``` I have verified that a linear classifier can achieve better results (65% accuracy as opposed to 30%) on the embeddings from pytorch_transformers' BertModel.
03-26-2020 14:31:24
03-26-2020 14:31:24
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@royeis did you find a solution for this? I am facing the same issue. <|||||>@SarikGhazarian Ended up solving this with a workaround: constructed a PyTorch model that had huggingface's BertModel as a module and another linear layer that received Bert's outputs and acted as a classifier. The Bert module's parameters were frozen and training worked properly from there.
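For reference, a minimal sketch of the kind of workaround described above — `BertModel` wrapped in a custom module with a frozen encoder and a trainable linear head. Hyper-parameters, pooling choice and the toy input are assumptions, not the commenter's exact code:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class FrozenBertClassifier(nn.Module):
    def __init__(self, num_labels=10):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for param in self.bert.parameters():   # freeze the encoder
            param.requires_grad = False
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        pooled = outputs[1]                    # pooled [CLS] representation
        return self.classifier(pooled)

model = FrozenBertClassifier(num_labels=10)
logits = model(torch.tensor([[101, 7592, 102]]))   # toy input: [CLS] hello [SEP]
loss = nn.CrossEntropyLoss()(logits, torch.tensor([4]))
print(loss.item())
```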
transformers
3,455
closed
Tokenizers: Start cleaning examples a little
03-26-2020 14:10:22
03-26-2020 14:10:22
Merging this as discussed (preliminarily) with @LysandreJik last week<|||||>This is great.
transformers
3,454
closed
NER pipeline usage examples
In the NER usage examples: https://huggingface.co/transformers/usage.html#named-entity-recognition Can you explain why the examples give only I- entities? For example: ('New', 'I-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC') ('Hu', 'I-ORG'), ('##gging', 'I-ORG'), ('Face', 'I-ORG'), ('Inc', 'I-ORG') Why are there no B-s? As in: ('New', 'B-LOC'), ('York', 'I-LOC'), ('City', 'I-LOC') etc
03-26-2020 13:55:45
03-26-2020 13:55:45
The German and English CoNLL datasets use the IOB1 tagging scheme, in which `B-` is only used to separate two adjacent entities of the same type. For example, a standalone mention like `New York City` is tagged `I-LOC I-LOC I-LOC`; a `B-LOC` would only appear if another location entity immediately followed it.
transformers
3,453
closed
Create card for the model: GPT-2-finetuned-covid-bio-medrxiv
03-26-2020 13:43:12
03-26-2020 13:43:12
transformers
3,452
closed
Write With Transformer returning a 502 on gpt2/xl model
When setting the model size to **gpt2/xl**, WwT gets stuck on loading the autocomplete. Checking Chrome's console tells me "Failed to load resource: the server responded with a status of 502 (Bad Gateway)" Having a quick look through the older tickets, I saw that this has happened before. #2121
03-26-2020 13:34:16
03-26-2020 13:34:16
We had to turn that model off because it was really expensive/hard to operationalize. We'll add a warning to that particular webpage. (cc @LysandreJik)
transformers
3,451
closed
[examples] SummarizationDataset cleanup
- factor out redundant tokenization logic - For both articles and summaries, batches are "trimmed" such that no columns are full of `pad_token_id`. - The max sizes are 1024 for source and 56 for target. This ensures that truncation never happens for summaries, and rarely happens for articles. These values are unchanged, but converted to command line arguments for ease of use. - added small unittest (just for the dataset). - Verified manually on GPU. Loss goes down, peak memory usage identical, speed improved by ~10\%. Summary Statistics on summary lengths (in # tokens) for cnn_dm test data: ![image](https://user-images.githubusercontent.com/6045025/77860710-94b83500-71de-11ea-8d09-e952f3dd9e8c.png)
03-26-2020 13:28:41
03-26-2020 13:28:41
@acarrera94 does this look OK?<|||||>This looks great! Something else we might want to figure out is the best configuration of max_seq_length and max_target_length. For example, the tesla V100 in google colab can for sure handle about max_seq_length=768 and max_target_length=56 with a batch size of 4. It can't handle max_seq_length=1028 with the same configuration, since it will run out of memory. <|||||>Going to address by trimming batches so that they don't add extra padding. (EDIT: this is not enough, still need to truncate on both sides.)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=h1) Report > Merging [#3451](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b0ad06951708b782e45b02a4d092f6fcde68a9b9&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3451/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3451 +/- ## ========================================== + Coverage 78.02% 78.03% +0.01% ========================================== Files 104 104 Lines 17709 17709 ========================================== + Hits 13817 13819 +2 + Misses 3892 3890 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3451/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3451/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=footer). Last update [b0ad069...94a0baa](https://codecov.io/gh/huggingface/transformers/pull/3451?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hard to unittest cause of the PL dependency but I verified that the script runs locally.<|||||>@sshleifer, in MT code it is common to group the batches together to minimize padding. Do you think this is worth implementing? One downside of this method is that is doesn't really work on TPU or architectures that expect fixed sizes. <|||||>Yes, is something like [SortishSampler](https://github.com/fastai/fastai/blob/master/fastai/text/data.py#L99) the right idea? It seems easy to implement if you only consider padding on the `source` side. Do you know of an intelligent `key_func` to sort examples that considers both sides? <|||||>lgtm
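For readers wondering what "trimming" means here, a minimal sketch of the idea (an illustration, not the exact helper from this PR): drop every column of a padded batch that contains only `pad_token_id`, so short batches waste less compute.

```python
import torch

def trim_batch(input_ids, pad_token_id, attention_mask=None):
    """Remove columns that are populated exclusively by pad_token_id."""
    keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
    if attention_mask is None:
        return input_ids[:, keep_column_mask]
    return input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask]

pad = 1  # pad_token_id used by Bart/Roberta tokenizers
batch = torch.tensor([[5, 6, 7, pad, pad],
                      [8, 9, pad, pad, pad]])
print(trim_batch(batch, pad))  # only the first 3 columns survive
```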
transformers
3,450
closed
Create card for model GPT-2-finetuned-CORD19
03-26-2020 13:07:20
03-26-2020 13:07:20
transformers
3,449
closed
revert unpin isort commit
This PR reverts the change of https://github.com/huggingface/transformers/commit/fbc5bf10cfe4d4ca81f8daacc148b0abd51dda5a Using the unpinned version of `isort` makes black and `isort` disagree in some cases. In this PR the unpinned version of `isort` leads to a failed code quality test: https://github.com/huggingface/transformers/pull/3411
03-26-2020 11:53:02
03-26-2020 11:53:02
This syntax is not supported by PyPI unfortunately, so we had to revert this.<|||||>> This syntax is not supported by PyPI unfortunately, so we had to revert this. I see... is there another way of getting the pinned version? In general, do we need isort? Doesn't black also sort the import statements? <|||||>Other ways are: - bug the isort maintainer to release a version containing this commit - push a new forked package to PyPI, like `isort-pvp` or `isort-black-compat` or whatever. Yes, we need isort.<|||||>I have also observed this issue with #3402
transformers
3,448
closed
Failure to load checkpoints saved during distributed training
# 🐛 Bug ## Information Model I am using: Bert Language I am using the model on: English, German, Swedish The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) using distributed training: e.g. `python3 -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=0 run_language_modeling.py` --output_dir output_dir [--args...] 2. When the training is over, try to load the final model: BertModel.from_pretrained('output_dir'): this works. 3. Then, try to load a checkpoint: e.g., `BertModel.from_pretrained('output_dir/checkpoint-1000')`: this gives a runtime error: ` Traceback (most recent call last): File "/cluster/shared/nlpl/software/modules/in5550/202002/lib/python3.7/site-packages/transformers/modeling_utils.py", line 470, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/cluster/shared/nlpl/software/modules/pytorch/1.4.0/lib/python3.7/site-packages/torch/serialization.py", line 529, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/cluster/shared/nlpl/software/modules/pytorch/1.4.0/lib/python3.7/site-packages/torch/serialization.py", line 709, in _legacy_load deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly) RuntimeError: storage has wrong size: expected 4434893008627221919 got 2359296 ` ## Expected behavior We should be able to load from the checkpoints also when the training is distributed. ## Suggested solution Check `if args.local_rank == -1 or torch.distributed.get_rank() == 0` on line 370 (just like on line 736). ## Environment info - `transformers` version: 2.5.0 - Platform: UNIX - Python version: Python 3.5.3 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): No - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed (1 node, 4 GPUs)
03-26-2020 11:29:28
03-26-2020 11:29:28
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
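For illustration, a minimal sketch of the suggested guard — only the rank-0 process writes checkpoints during distributed training. The helper name and surrounding variable names follow the example script's conventions but are assumptions, not a patch:

```python
import os
import torch

def save_checkpoint(model, tokenizer, output_dir, args):
    # Only the main process should write checkpoints; under DDP every process
    # otherwise writes the same files concurrently and can corrupt them.
    if args.local_rank == -1 or torch.distributed.get_rank() == 0:
        os.makedirs(output_dir, exist_ok=True)
        model_to_save = model.module if hasattr(model, "module") else model
        model_to_save.save_pretrained(output_dir)
        tokenizer.save_pretrained(output_dir)
        torch.save(args, os.path.join(output_dir, "training_args.bin"))
```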
transformers
3,447
closed
Save models after each epoch
# 🚀 Feature request In the various training scripts in `examples`, would it be better to checkpoint the model at the end of each epoch, as well as every `save_steps` iterations as specified by the user? ## Motivation I suppose for language modelling, saving the model after each epoch is not as important, but for anything supervised (and some other applications) it seems natural to want checkpoints after each epoch. There are plenty of examples in the literature where one wants to inspect models when they have seen each training example some specific number of times. I have been doing some experiments recently where this was the case (with `run_language_modeling.py` actually) and found myself having to manually enter the number of iterations per epoch as `save_steps` to get the desired checkpoints. ## Your contribution Would be a simple change to the various training scripts. Happy to do a PR.
03-26-2020 10:54:44
03-26-2020 10:54:44
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I also need this. Did you figure out a better way of doing it other than counting the number of checkpoints required for an epoch manually?<|||||>> I also need this. Did you figure out a better way of doing it other than counting the number of checkpoints required for an epoch manually? Without changing the code, no I don't think there's an alternative to counting manually. Looks like it would be a simple matter of repeating [this line](https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/src/transformers/trainer.py#L521) a few lines further down at the end of the epoch loop.<|||||>not sure what you mean by manually counting..but, if you just add this line before the start of each epoch(i.e [here](https://github.com/huggingface/transformers/blob/0866669e751bef636fa693b704a28c1fea9a17f3/src/transformers/trainer.py#L463)), you can make the model save after each epoch. The len(epoch_iterator) is the number of batches in an epoch. `self.args.save_steps = len(epoch_iterator)`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
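Outside the `Trainer`, the same effect can be had in a hand-written loop with one `save_pretrained` call per epoch — a minimal sketch, assuming each batch is a dict of model inputs that includes labels (directory naming is an assumption):

```python
import os

def train(model, tokenizer, train_dataloader, optimizer, num_epochs, output_dir):
    for epoch in range(num_epochs):
        model.train()
        for batch in train_dataloader:
            loss = model(**batch)[0]   # first output is the loss when labels are passed
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

        # Checkpoint once per epoch instead of every `save_steps` updates.
        epoch_dir = os.path.join(output_dir, f"checkpoint-epoch-{epoch}")
        os.makedirs(epoch_dir, exist_ok=True)
        model.save_pretrained(epoch_dir)
        tokenizer.save_pretrained(epoch_dir)
```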
transformers
3,446
closed
Special tokens to pre-trained BART model
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details Is it possibile to add special tokens to the pre-trained BART model? My text has `<s>` as sequence separator for sentences. I would like that the encoder will handle it as a whole token, otherwise the model will break it in codes and learn like `<s` or `s>` etc. in the same we did for other tokenizers like `GPT2Tokenizer`? ```python tokenizer = GPT2Tokenizer.from_pretrained(args.out, unk_token="<unk>", bos_token="<s>", eos_token="</s>", pad_token = "<pad>", additional_special_tokens=["<startoflyrics>", "<endoflyrics>", "<nl>"]) ```
03-26-2020 10:53:25
03-26-2020 10:53:25
Hi! Two points that might be helpful. - The `add_special_tokens` functionality should work the same as `RobertaTokenizer`. - `<s>` is already the `bos` token, so I don't expect it to be broken up. Let me know if that resolves your issue, thanks!<|||||> Not sure how to go about doing this ? @sshleifer any code example i see BartTokenizer is essentially RobertaTokenizer which is GPT2Tokenizer for fine-tuning BART in lightning base we have ``` self.tokenizer = AutoTokenizer.from_pretrained( self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path, cache_dir=cache_dir, ) ``` Can we add the list of special tokens here ? If then how ?<|||||>Are you trying to add tokens to the vocab and give them new ids? A specific example with what you expect the tokenizer to produce would be helpful. I tried the following and it doesn't work as OP intended, afaict. ```python from transformers import BartTokenizer tokenizer = BartTokenizer.from_pretrained('facebook/bart-large', additional_special_tokens=["<startoflyrics>", 'dringus']) encoded = tokenizer.encode_plus(' <startoflyrics> dringus')['input_ids'] # [0, 3, 3, 2] tokenizer.decode(encoded) # '<s><unk><unk></s>' ``` <|||||>Yes @sshleifer i want to add new tokens to the vocab and give them new ids How to go about doing it ?<|||||>@patrickvonplaten @LysandreJik what is the canonical way to add new non-special tokens? (1) Is there an easier way than making a new vocab and merges file? (2) If not, is there an example of how to do that?<|||||>I am only familiar with the `add_special_tokens` functionality for new tokens that get the "special tokens" treatment. For normal tokens, one can use `add_tokens` as far as I know. <|||||>``` self.tokenizer = AutoTokenizer.from_pretrained( self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path, cache_dir=cache_dir, ) self.model = MODEL_MODES[mode].from_pretrained( self.hparams.model_name_or_path, from_tf=bool(".ckpt" in self.hparams.model_name_or_path), config=self.config, cache_dir=cache_dir, ) self.tokenizer.add_tokens(['multi-sentence', ':snt1', ':snt2', ':snt3', ':snt4', ':snt5', ':snt5', ':snt6', ':snt7', ':snt8', ':snt9', ':root', ':ARG1', ':mod', ':op1', ':ARG0', ':ARG0-of', ':name', ':op2', ':ARG2', ':ARG1-of', ':purpose', ':prep-in', ':time', ':li', ':quant', ':unit', ':poss', ':ARG3', ':location', ':domain', ':part-of', ':manner', ':polarity', ':condition', ':ARG4', ':extent', ':time-of', ':location-of', ':op3', ':beneficiary', ':topic', ':degree', ':ARG2-of', ':example', ':extent-of', ':month', ':day', ':op4', ':ARG5', ':manner-of', ':concession', ':duration', ':path', ':mode', ':medium', ':ord', ':value', ':destination', ':source', ':direction', ':instrument-of', ':consist-of', ':dayperiod', ':frequency', ':year', ':quant-of', ':weekday', ':compared-to', ':prep-on', ':ARG3-of', ':degree-of', ':prep-as', ':instrument', ':op5', ':prep-from', ':prep-to', ':century', ':era', ':condition-of', ':op6', ':op7', ':concession-of', ':polite', ':age', ':prep-with', ':decade', ':poss-of', ':prep-without', ':prep-in-addition-to', ':accompanier', ':ord-of', ':direction-of', ':prep-against', ':prep-at', ':subevent-of', ':snt10', ':snt11', ':duration-of', ':prep-for', ':source-of', ':frequency-of', ':topic-of', ':season', ':path-of', ':op8', ':op9', ':prep-among', ':prep-on-behalf-of', ':subevent', ':part', ':ARG4-of', ':beneficiary-of', ':scale', ':example-of', ':prep-by', ':range', ':purpose-of', ':destination-of', ':op10', ':op1-of', ':name-of', 
':medium-of', ':prep-along-with', ':conj-as-if', ':timezone', ':prep-under', ':accompanier-of', ':age-of', ':op11', ':op12', ':op13', ':op14', ':op15', ':prep-amid', ':prep-toward', ':prep-out-of', ':prep-into', ':domain-of', ':ARG7', ':quarter', ':ARG5-of', ':op16', ':op17', ':op18', ':op19', ':op20', ':ARG8', ':ARG9', ':calendar', ':year2', ':ARG6', ':subset-of', ':prep-with-of']) self.model.resize_token_embeddings(len(self.tokenizer)) ``` This worked for me <|||||>Yes, the `add_tokens` that @patrickvonplaten and @tuhinjubcse mentionned should get the job done<|||||>I need to add tokens that will serve as separator in the text generation. For instance: ^Input:^Bitocoin price went down for 10 percent^Caption:^10% OFF^output:^10% reduce of the bitcoin price. So in this example is ^input:^, ^Caption:^ and ^output:^. The idea is when I give to the train model the sentence: ^Input:^Bitocoin price went down for 10 percent^Caption:^ it should generate text but the model should learn the static tokens in the example. Should I use add_tokens or add_special_tokens?
transformers
3,445
closed
run_lm_finetuning on multiple training files
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> Hi, I would like to fine-tune huggingface's pre-trained BERT model on a relatively big text data. I split the data into multiple files (~10k files) in .raw format in the same folder. First, I succeeded to run run_lm_finetuning.py on one of the raw files I generated. Now, I would like to run that in my 10k raw files. I realised that that train_data_file argument does not accept a folder name (only the path to a single raw file). I believe that we can do a loop over the 10k train files and incrementally fine-tuning the model, but it does not look like a best solution for me... Could you please tell me if it exists a simple way to achieve the above? Thank you very much for your help.
03-26-2020 09:35:33
03-26-2020 09:35:33
Hi, this is not currently supported, you would need to implement this yourself. Feel free to open a PR if you do
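One simple workaround, sketched below, is to merge the shards into a single file that `--train_data_file` accepts (folder and file names are hypothetical):

```python
import glob

# Merge all .raw shards into one training file for --train_data_file.
with open("train_all.raw", "w", encoding="utf-8") as merged:
    for path in sorted(glob.glob("data/*.raw")):
        with open(path, encoding="utf-8") as shard:
            merged.write(shard.read())
            merged.write("\n")
```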
transformers
3,444
closed
Import error in example script `run_language_modeling.py`
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): RobertaForMaskedLM Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. pip install transformers 2. run `run_language_modeling.py`, which is the example script 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Error message: > Traceback (most recent call last): File "run_language_modeling.py", line 42, in <module> from transformers import ( ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING' ## Expected behavior The script should run.. <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): na - Using GPU in script?: y - Using distributed or parallel set-up in script?: n ## Note The work around is to use > from transformers.modeling_auto import MODEL_WITH_LM_HEAD_MAPPING > from transformers.file_utils import WEIGHTS_NAME Can you please update the example script? It is confusing ...
03-26-2020 06:12:34
03-26-2020 06:12:34
You need to upgrade your version of transformers (to 2.6), or better, to [install from source](https://github.com/huggingface/transformers#run-the-examples).<|||||>I just pulled the `huggingface/transformers-tensorflow-gpu:2.10.0` docker image, went to the `examples/language-modeling/` folder and ran the following, and I got the same error: ``` python3 run_language_modeling.py --output_dir=/app/data --model_type=distilbert --model_name_or_path=distilbert-base-uncased --do_train --train_data_file=/app/data/train_data.txt --do_eval --eval_data_file=/app/data/eval_data.txt --mlm ``` Haven't tried the workaround above yet. Steps: - `docker run -it -v `pwd`/data:/app/data huggingface/transformers-tensorflow-gpu:2.10.0` - `cd workspace/examples/language-modeling/` - try to run example command using `python3` `python3 -m pip show transformers` reports `2.10.0` is installed. <|||||>I get the issue (the `master` branch being checked out in the docker build) it just seems like it'd be cool for there to be a simpler way to run the examples in docker. If you wanted to use the `2.9.0` image, you'd have to pull the image and have your script first check out master as of the tag `2.9.0` then install from source, right? It'd be a nice feature if the docker images could run the examples without modification<|||||>I get the same issue when I `pip install transformers`. When I downgrade to `2.6.0`, it can't import `CONFIG_MAPPING`. Anything from `2.7.0` to `2.10.0` up I get the `MODEL_WITH_LM_HEAD_MAPPING` error<|||||>Okay, I got it to work for `2.10.0`. I just had to reinstall PyTorch ``` pip3 install torch ```
transformers
3,443
closed
`run_language_modeling` fails with community model (BioClinicalBERT)
# 🐛 Bug The `run_language_modeling.py` fails when using community models. ## Information Model I am using (Bert, XLNet ...): BioClinical_BERT Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. download wikitext-2 raw 2. run exactly the same 3. This code command line yields the following error: ``` python run_language_modeling.py \ --output_dir=output \ --model_type=BioClinicalBERT \ --model_name_or_path=emilyalsentzer/Bio_ClinicalBERT \ --output_dir=/kaggle/working/model \ --do_train \ --line_by_line \ --train_data_file=wikitext-2-raw/wiki.train.raw \ --do_eval \ --eval_data_file=wikitext-2-raw/wiki.valid.raw \ --num_train_epochs=4 \ --mlm ``` the following error will appear: ``` File "run_language_modeling.py", line 782, in <module> main() File "run_language_modeling.py", line 732, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 333, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 987, in forward encoder_attention_mask=encoder_attention_mask, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 790, in forward encoder_attention_mask=encoder_extended_attention_mask, File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 407, in forward hidden_states, attention_mask, head_mask[i], encoder_hidden_states, encoder_attention_mask File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 368, in forward self_attention_outputs = self.attention(hidden_states, attention_mask, head_mask) File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 314, in forward hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_bert.py", line 234, in forward attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1579022034529/work/aten/src/THC/THCBlas.cu:368 ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. 
--> ## Environment info transformers v2.6.0 - `transformers` version: - Platform: Kaggle - Python version: 3.7 - PyTorch version (GPU?): 1.4.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
03-26-2020 03:01:54
03-26-2020 03:01:54
Seems like scibert is also broken. When I run ``` python run_language_modeling.py \ --output_dir=output \ --model_type=scibert \ --model_name_or_path=allenai/scibert_scivocab_cased \ --output_dir=/kaggle/working/model \ --do_train \ --line_by_line \ --train_data_file=wikitext-2-raw/wiki.train.raw \ --do_eval \ --eval_data_file=wikitext-2-raw/wiki.valid.raw \ --num_train_epochs=4 \ --mlm ``` I get this: ``` Epoch: 0%| | 0/4 [00:00<?, ?it/s] Iteration: 0%| | 0/5942 [00:00<?, ?it/s] Iteration: 0%| | 1/5942 [00:00<33:48, 2.93it/s] Iteration: 0%| | 2/5942 [00:00<29:32, 3.35it/s] Iteration: 0%| | 3/5942 [00:00<25:44, 3.84it/s] Iteration: 0%| | 4/5942 [00:00<24:19, 4.07it/s] Iteration: 0%| | 5/5942 [00:01<22:20, 4.43it/s]Traceback (most recent call last): File "run_language_modeling.py", line 782, in <module> main() File "run_language_modeling.py", line 732, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_language_modeling.py", line 345, in train loss.backward() File "/opt/conda/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA error: device-side assert triggered ```<|||||>This is because the tokenizers you mention do not have a `tokenizer_config.json` file on the S3. There should be one limiting the maximum length to 512 tokens. Here it is set to 1e12 because it doesn't detect a maximum length in the configuration. cc @julien-c <|||||>Yes, in that case you would need to pass a `max_block` arg to the script. Let us know if it fixes your issue.<|||||>I see, thanks a lot! Here it would be `max_block=512` right (or whatever is the maximum length supported by that model)?<|||||>Yes, that's right!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I haven't had the chance to test `max_block=512`, but I hope this solves the problem. If it still persists, I'll re-open this issue.
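Until a `tokenizer_config.json` exists for these checkpoints, a hedged workaround (in addition to the script's block-size argument mentioned above) is to cap the tokenizer's maximum length yourself; attribute and keyword names vary across library versions:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

# Without a tokenizer_config.json the max length falls back to a huge default (1e12),
# so clamp it to BERT's 512 positions before building features.
tokenizer.max_len = 512  # called model_max_length in later releases

ids = tokenizer.encode("a very long clinical note " * 200, max_length=512)
print(len(ids))  # should not exceed 512
```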
transformers
3,442
closed
can't import TFBertModel from transformers
This is the error I get when importing TFBertModel from transformers: ``` from transformers import TFBertModel ImportError: cannot import name 'TFBertModel' from 'transformers' (/home/cally/.local/lib/python3.7/site-packages/transformers/__init__.py) ``` My environment: transformers 2.4, Linux, Python 3.7
03-26-2020 01:44:00
03-26-2020 01:44:00
I tried it on Mac and got the same error. My environment: transformers 2.4, macOS, Python 3.7<|||||>That's because you don't have TensorFlow 2 installed.<|||||>Conda users: if the problem is not solved by re/installing TensorFlow, update the modules in conda: `conda install tensorflow`<|||||>You might want to try this: `conda install -c huggingface transformers` (ref: https://pypi.org/project/transformers/)<|||||>Thank you @mzackaria for the answer. The problem was solved (a long time ago), as mentioned above :)<|||||>I'm on TensorFlow 1.x and can't upgrade to 2.x. Installing an older version of transformers worked for me: `pip install transformers==4.2.2`
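A quick sanity check of the root cause, since the `TF*` model classes are only exported when a TensorFlow 2.x installation is detected:

```python
import tensorflow as tf
import transformers

print(tf.__version__)                  # must be 2.x for the TF* classes to be exported
print(transformers.is_tf_available())  # False explains the ImportError

from transformers import TFBertModel   # works once TensorFlow 2.x is installed
```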
transformers
3,441
closed
Add support for the null answer in `QuestionAnsweringPipeline`
03-26-2020 01:12:03
03-26-2020 01:12:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=h1) Report > Merging [#3441](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/010e0460b22ddd7f74e31163f69ab3da2e9741ba&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `66.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3441/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3441 +/- ## ========================================== - Coverage 77.61% 77.60% -0.01% ========================================== Files 100 100 Lines 16972 16978 +6 ========================================== + Hits 13172 13175 +3 - Misses 3800 3803 +3 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.44% <66.66%> (-0.09%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3441/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.94% <0.00%> (-0.18%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=footer). Last update [010e046...8671bc3](https://codecov.io/gh/huggingface/transformers/pull/3441?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Hi @bryant1410, thanks for pushing this into the QA pipeline. My only concern is about the name of the parameter `version_2_with_negative` introduced in this PR. It seems very tight up to SQuAD2 and might be hard for newcomers to directly understand what it does. Would you mind changing the name of the parameter `version_2_with_negative` to `handle_impossible_answer` ? <|||||>Sure. Btw, is it okay for it to default to `False`?<|||||>LGTM too, thanks @bryant1410
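After this change the pipeline accepts the renamed flag; a minimal usage sketch (the SQuAD2-trained model name is only an example):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="Who composed the piece?",
    context="The weather in Paris was sunny all week.",
    handle_impossible_answer=True,  # allow an empty-string prediction when no span fits
)
print(result)
```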
transformers
3,440
closed
feat: config what's trainable in Bert layers
Make it possible to configure trainability for specific subparts of a `TFBertMainLayer`. This is minimal and Bert-only. Really should be added in more model types, not just Bert, but it probably necessarily differs quite a bit between model types, so probably can't be done in a very “DRY” way. See https://github.com/tensorflow/tensorflow/issues/37541 which makes this necessary: if we set `l.trainable = False` on a layer _after_ initializing it, then training and serialization proceeds without apparent problems but then deserialization will fail, because: * the `trainable` attribute doesn't get persisted * parameter values are deserialized and batch-assigned to model parameters _in order_ — with the implicit assumption that the ordering of parameters is the same as in the model before serialization * that assumption doesn't hold if some layers were not trainable before serialization, because the ordering of parameters depends on the `trainable` attribute, which is `True` by default because it wasn't persisted
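For context, a minimal sketch of the freeze-after-construction pattern described above, i.e. the pattern this PR aims to replace with config-driven trainability (attribute names follow the public `TFBertModel` layout):

```python
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")

# Freeze a sub-layer after construction. `trainable` is honoured during training,
# but it is not persisted with the weights, so a model reloaded from disk can map
# parameters back in the wrong order.
model.bert.embeddings.trainable = False

print(len(model.trainable_weights), len(model.non_trainable_weights))
```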
03-26-2020 00:36:18
03-26-2020 00:36:18
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=h1) Report > Merging [#3440](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/010e0460b22ddd7f74e31163f69ab3da2e9741ba&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3440/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3440 +/- ## ========================================== + Coverage 77.61% 77.62% +0.01% ========================================== Files 100 100 Lines 16972 16985 +13 ========================================== + Hits 13172 13185 +13 Misses 3800 3800 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.16% <100.00%> (+0.08%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=footer). Last update [010e046...2597d20](https://codecov.io/gh/huggingface/transformers/pull/3440?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,439
closed
Force the return of token type IDs
close #3313 close #3227 These two issues happen because the `token_type_ids` are not generated by the `encode_plus` method when the SQuAD and multiple choice scripts expect it. The GLUE script was patched by #3240
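For reference, a small sketch of how a script can force the key to be present regardless of model type (using the tokenizer's `return_token_type_ids` keyword):

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoded = tokenizer.encode_plus(
    "What is extractive QA?",
    "Extractive QA selects a span from the context.",
    max_length=64,
    return_token_type_ids=True,  # force the key even for models that don't use segment ids
)
print(encoded.keys())
```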
03-25-2020 21:45:04
03-25-2020 21:45:04
transformers
3,438
closed
Same probability from fine-tuning custom pre-trained LM
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers If your question wasn't answered after a period of time on Stack Overflow, you can always open a question on GitHub. You should then link to the SO question that you posted. --> ## Details <!-- Description of your issue --> I have a dataset with QA sentence pairs. I pre-trained a bert model from scractch using run_language_modeling.py where I concatenated the QA pairs and trained the dataset line_by_line. I tried to fine-tune the custom pre-trained model, but am getting the same probability output for different inputs. I also tried to reduce the learning rate, but the constant probability problem remains. What could I be doing wrong? <!-- You should first ask your question on SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
03-25-2020 21:20:58
03-25-2020 21:20:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
3,437
closed
[Bug fix] Using loaded checkpoint with --do_predict (instead of random init)
Without this fix, I'm getting near-random validation performance for a trained model, and the validation performance differs per validation run. I think this happens since the `model` variable isn't set with the loaded checkpoint, so I'm using a randomly initialized model. Looking at the model activations, they differ each time I run evaluation (but they don't with this fix).
03-25-2020 20:23:47
03-25-2020 20:23:47
Tagging @srush @nateraw from the original [Lightning GLUE PR](https://github.com/huggingface/transformers/pull/3290) to check I'm not missing something?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=h1) Report > Merging [#3437](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbe839ee0b78a17e74dab218bfae7efe904ac3b&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `88.88%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3437/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3437 +/- ## ========================================== + Coverage 77.56% 77.60% +0.04% ========================================== Files 100 100 Lines 16970 16967 -3 ========================================== + Hits 13162 13167 +5 + Misses 3808 3800 -8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <88.88%> (+2.94%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3437/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=footer). Last update [83272a3...f12d585](https://codecov.io/gh/huggingface/transformers/pull/3437?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>I'll check this out later tonight! I'm on mobile so I've just looked at your commit quickly...looks like you're right. I know in the past I've instantiated the model then called `model.load_from_checkpoint(loaded_ckpt)` so what you've got probably gets the same job done. The benefit of doin it the way I just mentioned is that if you already have a model object available from training, you can just load the best ckpt into that. Either way works though! <|||||>That was fast :smile: Looks good to me!<|||||>Thanks for checking :) I'm still not able to reproduce my in-training validation performance though with the --do_predict flag, any ideas? I'm getting identical validation accuracy on different runs now, but the accuracy is still near random<|||||>@ethanjperez I just [checked the docs](https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html), and it looks like the way we were doing it originally was correct. ```python model = MyLightingModule.load_from_checkpoint(PATH) model.eval() y_hat = model(x) ``` The way that I was explaining to do it would require you to use `torch.load` on the checkpoint path, which you would then pass to `model.load_state_dict`. The above method (what we had originally) is probably supposed to do that for you. 
I haven't had the chance to recreate the issue, so I'll have to take a look.<|||||>Cool thanks! Even with the original way, I was still not able to reproduce my in-training validation performance (just something to look out for when you try) - In particular, I'm loading/running an already trained model with the `--do_predict` flag without using the `--do_train` flag (I don't think you'd see the issue if you use both `--do_predict` and `--do_train`)<|||||>@nateraw @sshleifer Are you guys able to load a trained model successfully with the pytorch-lightning scripts? Even after this patch, I am having issues loading an already trained model, i.e., if I just use `--do_eval` without also using `--do_train`<|||||>Sorry for taking so long. I will try to reproduce this today if there is no update on your end! Filing an issue with what you ran/expected would help :) @ethanjperez <|||||>@sshleifer Just seeing this - were you able to reproduce the issue? I can't remember what exact command I ran, but it was a standard evaluation command (the same as the training command I used, but with a few flags tweaked, e.g. drop the `--do-train` flag and add the `--do-eval` flag)<|||||>This is fixed now.
transformers
3,436
closed
TFXLMRoberta impossible to load base and large model with pretrained weight ?
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): XLMRoberta Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts (give details below) The task I am working on is: my own task or dataset (give details below) ## To reproduce Launch the command: `model = TFXLMRobertaForSequenceClassification.from_pretrained(pretrained_model_name_or_path="xlm-roberta-large")` Steps to reproduce the behavior: 1. from transformers import * 2. launch the command above 3. error: > TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ## Expected behavior The model should load. ## Environment info - `transformers` version: 2.6.0 - Platform: Colab - Tensorflow version (GPU?): 2.1 GPU - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
03-25-2020 20:05:42
03-25-2020 20:05:42
Maybe you can use from_pt=True: TFXLMRobertaForSequenceClassification.from_pretrained(pretrained_model_name_or_path="xlm-roberta-large", **_from_pt=True_**)<|||||>Hi, thanks for your answer. I got exactly the same message with "from_pt=True"<|||||>I suspect that the file does not exist. I managed to load it with TFXLMRoberta by doing this: ``` m = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=1) m.save_pretrained("./") del m model = TFXLMRobertaForSequenceClassification.from_pretrained("./", num_labels=1, from_pt=True) ```<|||||>Hello, indeed there is no official XLM-R TF checkpoint. You can use @jplu's [checkpoint from the modelhub](https://huggingface.co/models/?search=jplu%2Ftf-xlm): ```py m = TFXLMRobertaForSequenceClassification.from_pretrained("jplu/tf-xlm-roberta-base", num_labels=1) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,435
closed
Updated/added model cards
Trained four models, adding/updating model cards to make them consistent!
03-25-2020 19:16:16
03-25-2020 19:16:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=h1) Report > Merging [#3435](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ccbe839ee0b78a17e74dab218bfae7efe904ac3b&el=desc) will **increase** coverage by `0.04%`. > The diff coverage is `88.88%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3435/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3435 +/- ## ========================================== + Coverage 77.56% 77.60% +0.04% ========================================== Files 100 100 Lines 16970 16967 -3 ========================================== + Hits 13162 13167 +5 + Misses 3808 3800 -8 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `24.68% <88.88%> (+2.94%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.13%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=footer). Last update [83272a3...d486459](https://codecov.io/gh/huggingface/transformers/pull/3435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,434
closed
How to detokenize a BertTokenizer output?
For example, let's tokenize a sentence "why isn't Alex' text tokenizing": tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) tokens = tokenizer.tokenize("why isn't Alex' text tokenizing") We get the following output: ['why', 'isn', "'", 't', 'Alex', "'", 'text', 'token', '##izing'] I want to convert it to: why isn't Alex' text tokenizing It seems the tokenizer doesn't do its job in the best way. If tokenization instead produced something like this: ['why', 'isn', "##'", '##t', 'Alex', "##'", 'text', 'token', '##izing'] it would be easy to convert: tokens = ['why', 'isn', "##'", '##t', 'Alex', "##'", 'text', 'token', '##izing'] restored_text = [None] * len(tokens) tokens.extend(['#', '#']) for i in range(len(tokens) - 2): if re.findall("#{2}", tokens[i+1]): restored_text[i] = tokens[i] + tokens[i+1].replace('##', '') if re.findall("#{2}", tokens[i+2]): restored_text[i] = restored_text[i] + tokens[i+2].replace('##', '') else: restored_text[i] = tokens[i] restored_text_without_masks = [] for i in range(len(restored_text)): if not restored_text[i].startswith('#'): restored_text_without_masks.append(restored_text[i])
03-25-2020 14:56:02
03-25-2020 14:56:02
I came across the same problem some days ago. I like your code, however it can be faster: ```python def is_subtoken(word): if word[:2] == "##": return True else: return False tokens = ['why', 'isn', "##'", '##t', 'Alex', "##'", 'text', 'token', '##izing'] restored_text = [] for i in range(len(tokens)): if not is_subtoken(tokens[i]) and (i+1)<len(tokens) and is_subtoken(tokens[i+1]): restored_text.append(tokens[i] + tokens[i+1][2:]) if (i+2)<len(tokens) and is_subtoken(tokens[i+2]): restored_text[-1] = restored_text[-1] + tokens[i+2][2:] elif not is_subtoken(tokens[i]): restored_text.append(tokens[i]) ```<|||||>@GuillemGSubies Did you solve this problem in your task?<|||||>> @GuillemGSubies Did you solve this problem in your task? I used a modification of the code I posted above (in my use case I needed to interact with some NER labels also). However if you want to detokenize I think your code works perfectly.<|||||>@GuillemGSubies I just want to clarify something in my question. Let's consider two sentences: "why isn't Alex's text tokenizing? The house on the left is the Smiths' house" Now let's tokenize and decode: from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True) tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house"))) We get: "why isn't alex's text tokenizing? the house on the left is the smiths'house" **My question is how dealing with missing space in some possessives like *smiths'house*?** For me, it seems that the process of tokenization in Transformers is done not right. Let's consider output of tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house") we get: ['why', 'isn', "'", 't', 'alex', "'", 's', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "'", 'house'] So in this step, we already have lost important information about the last apostrophe. It would be much better if tokenization was done in the another way: ['why', 'isn', "##'", '##t', 'alex', "##'", '##s', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "##'", 'house'] In this way, tokenization keeps all information about apostrophes, and we will not have problems with possessives. <|||||>This goes beyond of my understanding of the library, I am not a dev, sorry<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
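For completeness, the library also ships a helper that undoes WordPiece splitting, which covers most of this use case (lower-casing and exact spacing around apostrophes are still not recoverable):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
tokens = tokenizer.tokenize("why isn't Alex' text tokenizing")

# Joins WordPiece pieces back together by stripping the "##" continuation markers.
print(tokenizer.convert_tokens_to_string(tokens))
```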
transformers
3,433
closed
Extend config with task specific configs.
As discussed and proposed by @thomwolf in PR #3413, another step towards a combined tokenizer/model config is this PR. It extends the normal config with the following parameters: ``` { .... prefix = "", # generic generation HP max_length: 100, length_penalty: 1.0, task_specific_params: { "summarization": { # task id (e.g. name of the pipeline?) max_length: 140, length_penalty: 2.0 }, "translation_en_to_de": { prefix: "translate English to German: " max_length: 160, length_penalty: 3.0 }, }, } ``` In terms of hierarchy for a task-specific generation it would go as follows: 1) Is the parameter provided as an argument to the `generate` method ? Yes use these. No - go to 2. 2) Is the parameter provided in the `task_specific_params dict` ? Yes use these. No - go to 3. 3) Is the parameter provided in the default `config dict`? Yes use these. No - go to 4. 4) Is the parameter provided hard-coded in the model's config file? Yes use these. No - use the very default parameters of `PretrainedConfig` These were our arguments in favor of this: - This removes a lot of hard coded parameters in pipelines and examples - Another step towards a combined tokenizer / model config - A lot of weird if-else statements can be saved ("If task is en-de translation then do X" won't be necessary as the en-de specific parameters will override the default ones) ### TODO If you guys are fine with this structure: - [ ] I will add the `task_specific_params` for Bart and T5s configs on S3 - [ ] clean up the examples and pipelines. - [ ] rebase all the T5s PRs: #3428, #3419, #3413, #3411
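A hedged sketch of how a pipeline could apply step 2 of that hierarchy, assuming the `task_specific_params` entry has been added to the T5 config on S3 as planned:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("t5-small")

# Overlay task-specific generation defaults (step 2) on top of the generic config
# values (step 3); explicit generate() keyword arguments still win (step 1).
task_params = (getattr(config, "task_specific_params", None) or {}).get("translation_en_to_de", {})
for key, value in task_params.items():
    setattr(config, key, value)

print(config.prefix, config.max_length, config.length_penalty)
```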
03-25-2020 14:41:47
03-25-2020 14:41:47
> I think this makes sense, giving where we've been going up to now. > > I would like to understand what is our philosophy with the growing size of the configuration files; for example the `bert-base-cased` configuration on S3 looks like this: > > ``` > { > "architectures": [ > "BertForMaskedLM" > ], > "attention_probs_dropout_prob": 0.1, > "hidden_act": "gelu", > "hidden_dropout_prob": 0.1, > "hidden_size": 768, > "initializer_range": 0.02, > "intermediate_size": 3072, > "max_position_embeddings": 512, > "num_attention_heads": 12, > "num_hidden_layers": 12, > "type_vocab_size": 2, > "vocab_size": 28996 > } > ``` > > (which is readable imo) and once it's saved it now looks like this: > > ``` > { > "_num_labels": 2, > "architectures": [ > "BertForMaskedLM" > ], > "attention_probs_dropout_prob": 0.1, > "bos_token_id": null, > "do_sample": false, > "early_stopping": false, > "eos_token_id": null, > "finetuning_task": null, > "hidden_act": "gelu", > "hidden_dropout_prob": 0.1, > "hidden_size": 768, > "id2label": { > "0": "LABEL_0", > "1": "LABEL_1" > }, > "initializer_range": 0.02, > "intermediate_size": 3072, > "is_decoder": false, > "is_encoder_decoder": false, > "label2id": { > "LABEL_0": 0, > "LABEL_1": 1 > }, > "layer_norm_eps": 1e-12, > "length_penalty": 1.0, > "max_length": 20, > "max_position_embeddings": 512, > "min_length": 0, > "model_type": "bert", > "no_repeat_ngram_size": 0, > "num_attention_heads": 12, > "num_beams": 1, > "num_hidden_layers": 12, > "num_return_sequences": 1, > "output_attentions": false, > "output_hidden_states": false, > "output_past": true, > "pad_token_id": 0, > "pruned_heads": {}, > "repetition_penalty": 1.0, > "temperature": 1.0, > "top_k": 50, > "top_p": 1.0, > "torchscript": false, > "type_vocab_size": 2, > "use_bfloat16": false, > "vocab_size": 28996 > } > ``` > > (which is less readable), are we planning to keep them growing as the tokenizer and model configurations are merged? I feel like adding all those attributes to the configuration saves an "experiment" more than a "model". Is this something we're aiming for? Might it be possible to only save parameters that are different from the default config of the corresponding model? This would keep it readable. <|||||>LGTM and I agree with what @LysandreJik and you just said above. Serialized `config.json` should be more minimal. For instance I've always disliked the `id2label` and `label2id` being serialized even for models that don't have a classification head.<|||||>After this is merged I can open a new PR that serializes only the non-default values.<|||||>I agree with what @LysandreJik and @julien-c says about serializing only non-default values by the way.
transformers
3,432
closed
how to use transformers to get all pretraining model names in transformers hub
03-25-2020 13:46:16
03-25-2020 13:46:16
```python from transformers.hf_api import HfApi api = HfApi() models = api.list_models() ``` This is not very well documented yet so feel free to add to the doc.
transformers
3,431
closed
Error ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py)
# 🐛 Bug ## Information I am doing the tutorial ["Quick tour of the fine-tuning/usage scripts"](https://github.com/huggingface/transformers#quick-tour-of-the-fine-tuningusage-scripts) I downloaded the Glue dataset. When I try to run this command from pytorch ``` python ./examples/run_glue.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --task_name MRPC \ --do_train \ --do_eval \ --do_lower_case \ --data_dir C:/Git/RemoteDGX/MRPC/glue_data/MRPC \ --max_seq_length 128 \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=8 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/MRPC/ ``` I am getting this error: ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py) : ## To reproduce Steps to reproduce the behavior: 1.download the Glue database 2. execute the script run_glue.py this is the stacktrace: ``` 2020-03-25 14:09:19.698135: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll Traceback (most recent call last): File "C:/Git/RemoteDGX/transformers/examples/run_glue.py", line 32, in <module> from transformers import ( ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: windows - Python version: 3.7 - PyTorch version 1.4.0 without GPU:
03-25-2020 12:10:52
03-25-2020 12:10:52
You're not running the latest version of transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,430
closed
Problem saving and/or loading fine-tuned model
# ❓ Questions & Help I have finetuned bert-base-cased for multi-class classification with BertForSequenceClassification and got great results (about 88% accuracy on my dev data). However, when I save the finetuned model, load it and run the evaluation on the exact same dev data, I got awful results (about 0.17 accuracy). At first glance, it seems that either I am wrongly saving the fine-tuned model OR wrongly loading it after training. Would it be possible that save_pretrained only save the weights of the BERT model without the ones of the classifier above ? @patrickvonplaten ## Details Here is how I save the fine-tuned model after training: `model_to_save = model.module if hasattr(model, "module") else model # Take care of distributed/parallel training` `model_to_save.save_pretrained(args.output_dir)` `tokenizer.save_pretrained(args.output_dir)` And here is how I load the fine-tuned model for running evaluation: `model = BertForSequenceClassification.from_pretrained( args.model_name_or_path, num_labels = args.num_labels, output_attentions = False, output_hidden_states = False, cache_dir = args.cache_dir)` where `args.model_name_or_path` is the path of my .bin checkpoint, and args.num_labels stays unchanged during all process. ## Full code ```python def train(args, model, tokenizer, dataset, tb_writer, categories): # Load training dataset. if args.do_eval and args.eval_filepath is None: print("No validation file given: splitting dataset to train/test datasets...\n") train_dataset, validation_dataset = split_data(dataset, args.test_percent, args.seed) else: train_dataset = dataset print("Creating training dataloader...\n") train_data, train_sampler, train_dataloader = create_dataloader(train_dataset, args.batch_size, training_data=True) # Setting up Optimizer & Learning Rate Scheduler. optimizer = AdamW(model.parameters(), lr = args.learning_rate, eps = args.adam_epsilon ) total_steps = len(train_dataloader) * args.num_epochs scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, # Default value in run_glue.py num_training_steps = total_steps) # Init some useful variables. global_step = 0 tr_loss, logging_loss = 0.0, 0.0 # For each epoch... t = time.time() for epoch_i in range(0, args.num_epochs): # Perform one full pass over the training set. print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, args.num_epochs)) print('Training...') # Measure how long the training epoch takes. t0 = time.time() # Put the model into training mode. Don't be mislead--the call to # `train` just changes the *mode*, it doesn't *perform* the training. # `dropout` and `batchnorm` layers behave differently during training # vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch) model.train() # For each batch of training data... for step, batch in enumerate(train_dataloader): # Unpack this training batch from our dataloader. # As we unpack the batch, we'll also copy each tensor to the GPU using the `to` method. # `batch` contains three pytorch tensors: # [0]: input ids # [1]: attention masks # [2]: labels b_input_ids = batch[0].to(args.device) b_input_mask = batch[1].to(args.device) b_labels = batch[2].to(args.device) # Always clear any previously calculated gradients before performing a backward pass. # PyTorch doesn't do this automatically because accumulating the gradients is "convenient while training RNNs". 
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch) model.zero_grad() # Perform a forward pass (evaluate the model on this training batch). # This will return the loss (rather than the model output) because we have provided the `labels`. # The documentation for this `model` function is here: # https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) # The call to `model` always returns a tuple, so we need to pull the loss value out of the tuple. loss = outputs[0] if args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training # Accumulate the training loss over all of the batches so that we can calculate the average loss at the end. # `loss` is a Tensor containing a single value; the `.item()` function just returns the Python value from the tensor. tr_loss += loss.item() # Perform a backward pass to calculate the gradients. loss.backward() # Clip the norm of the gradients to 1.0. This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are modified based on their gradients, the learning rate, etc. optimizer.step() # Update the learning rate. scheduler.step() # Update global step. global_step += 1 # Progress update every 'logging_steps' batches. if args.logging_steps > 0 and step != 0 and step % args.logging_steps == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t0) # Compute average training loss over the last 'logging_steps'. Write it to Tensorboard. loss_scalar = (tr_loss - logging_loss) / args.logging_steps tb_writer.add_scalar('Train/Loss', loss_scalar, global_step) logging_loss = tr_loss # Print the log. print(' Batch {:>5,} of {:>5,}. Elapsed: {:}. Training loss: {:.2f}'.format(step, len(train_dataloader), elapsed, loss_scalar)) print(" Training epoch took: {:}\n".format(format_time(time.time() - t0))) if args.do_eval and args.eval_filepath is None: print("Running Validation...") # After the completion of each training epoch, measure our performance on our validation set. t0 = time.time() result, df_wrong, df_right = evaluate(args, model, validation_dataset, categories) # Write results to tensorboard. tb_writer.add_scalar('Test/Accuracy', result[0], epoch_i + 1) tb_writer.add_scalar('Test/Recall', result[1], epoch_i + 1) tb_writer.add_scalar('Test/Precision', result[2], epoch_i + 1) tb_writer.add_scalar('Test/F1 score', result[3], epoch_i + 1) tb_writer.add_scalar('Test/MCC', result[4], epoch_i + 1) # Plot confusion matrix. plot_confusion_matrix(result[5], categories, args.output_dir) # Save dataframes of wrong and right predictions for further analysis. df_wrong.to_csv(os.path.join(args.output_dir, 'preds_wrong.csv')) df_right.to_csv(os.path.join(args.output_dir, 'preds_right.csv')) print(" Validation took: {:}\n".format(format_time(time.time() - t0))) print("Training complete! 
Took: {}\n".format(format_time(time.time() - t))) print("Saving model to {}...\n.".format(args.output_dir)) model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training model_to_save.save_pretrained(args.output_dir) tokenizer.save_pretrained(args.output_dir) torch.save(args, os.path.join(args.output_dir, 'training_args.bin')) # Good practice: save your training arguments together with the trained model return def evaluate(args, model, validation_dataset, categories): # Creating validation dataloader. validation_data, validation_sampler, validation_dataloader = create_dataloader(validation_dataset, args.batch_size, training_data=False) # Get validation sentences. validation_sentences = validation_dataset[3] # Tracking variables nb_eval_steps = 0 preds = None out_label_ids = None # Put the model in evaluation mode--the dropout layers behave differently during evaluation. model.eval() # Evaluate data for one epoch for batch in validation_dataloader: # Add batch to GPU. b_input_ids, b_input_mask, b_labels = tuple(t.to(args.device) for t in batch) # Telling the model not to compute or store gradients, saving memory and speeding up validation with torch.no_grad(): # Forward pass, calculate logit predictions. # This will return the logits rather than the loss because we have not provided labels. # token_type_ids is the same as the "segment ids", which differentiates sentence 1 and 2 in 2-sentence tasks. outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask) # Get the "logits" output by the model. The "logits" are the output values prior to applying an activation function like the softmax. logits = outputs[0] # Move logits and labels to CPU and store them. if preds is None: preds = logits.detach().cpu().numpy() out_label_ids = b_labels.detach().cpu().numpy() else: preds = np.append(preds, logits.detach().cpu().numpy(), axis=0) out_label_ids = np.append(out_label_ids, b_labels.detach().cpu().numpy(), axis=0) # Track the number of batches nb_eval_steps += 1 # Take the max predicitions. preds = np.argmax(preds, axis=1) # Report results. result = compute_metrics(preds, out_label_ids, categories) print(" * Accuracy: {0:.4f}".format(result[0])) print(" * Recall: {0:.4f}".format(result[1])) print(" * Precision: {0:.4f}".format(result[2])) print(" * F1 score: {0:.4f}".format(result[3])) print(" * MCC: {0:.4f}".format(result[4])) # Get wrong and right predictions. df_wrong, df_right = analyze_predictions(preds, out_label_ids, validation_sentences) return result, df_wrong, df_right def main(args): # Create tensorboard summarywriter. tb_writer = SummaryWriter() # Create output dir if none mentioned. if args.output_dir is None: model_name = os.path.splitext(os.path.basename(args.model_name_or_path))[0] args.output_dir = "./output/" + model_name + '/' if not os.path.exists(args.output_dir): os.makedirs(args.output_dir) # Set the seed value all over the place to make this reproducible. set_seed(args.seed) print("\n========================================") print(' Load model ') print("========================================\n") print("Loading BertForSequenceClassification model...\n") model = BertForSequenceClassification.from_pretrained( args.model_name_or_path, # Use the 12-layer BERT model, with an cased vocab. num_labels = args.num_labels, # The number of output labels output_attentions = False, # Whether the model returns attentions weights. output_hidden_states = False, # Whether the model returns all hidden-states. 
cache_dir = args.cache_dir, ) #model = BertForSequenceClassification.from_pretrained(args.model_name_or_path) print('Loading BertTokenizer...\n') tokenizer = BertTokenizer.from_pretrained(args.model_name_or_path, do_lower_case=False) print("Setting up CUDA & GPU...") if torch.cuda.is_available(): if args.gpu_id: torch.cuda.set_device(args.gpu_id) args.n_gpu = 1 print("-> GPU training available! As '--gpu_id' was set, only GPU {} {} will be used (no parallel training).\n".format(torch.cuda.get_device_name(args.gpu_id), args.gpu_id)) else: args.n_gpu = torch.cuda.device_count() gpu_ids = list(range(0, args.n_gpu)) if args.n_gpu > 1: model = torch.nn.DataParallel(model, device_ids=gpu_ids, output_device=gpu_ids[-1]) print("-> GPU training available! Training will use GPU(s) {}\n".format(gpu_ids)) args.device = torch.device("cuda") else: args.device = torch.device("cpu") args.n_gpu = 0 print("-> No GPU available, using the CPU instead.\n") model.to(args.device) # Tell pytorch to run the model on the device. print("\n========================================") print(' Processing data ') print("========================================\n") df, categories = load_data(args) print("Tokenizing sentences...") tokenized = tokenize_sentences(tokenizer, df) attention_masks = create_masks(tokenized) dataset = (tokenized, df.Class_id.values, attention_masks, df.Sentence.values) if args.do_train: print("\n========================================") print(' Launching training ') print("========================================\n") train(args, model, tokenizer, dataset, tb_writer, categories) elif args.do_eval and args.eval_filepath is not None: print("\n========================================") print(' Launching validation ') print("========================================\n") result, df_wrong, df_right = evaluate(args, model, dataset, categories) # Save dataframes of wrong and right predictions for further analysis. df_wrong.to_csv(os.path.join(args.output_dir, 'wrong_preds.csv')) df_right.to_csv(os.path.join(args.output_dir, 'right_preds.csv')) ```
03-25-2020 11:19:42
03-25-2020 11:19:42
Hi @antoilouis, Thanks for posting this. A quick wild guess might be that after training you save the model in `args.output_dir` but then load it from a different `args.model_name_or_path` (also since you said that if you evaluate the model right after training you get good results). My advice to debug this would be the following: train for 1 epoch on a tiny part of the dataset, print out / save some model weights, then save the model, load it again and check whether the weights are equal. They should be equal. <|||||>Oh, and also please post your environment information here. Make sure that you have the newest version of `transformers`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I am facing the same issue. Do you have any advice now? Thanks!<|||||>I am currently facing the same issue with the `BertForTokenClassification` model: the model behaves differently each time it is loaded. Are there any solutions?
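Following the debugging advice above, a minimal sketch of the save/reload weight check (the output path and label count are hypothetical):

```python
import os
import torch
from transformers import BertForSequenceClassification

output_dir = "./output/bert-finetuned"  # hypothetical path
os.makedirs(output_dir, exist_ok=True)

model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
model.save_pretrained(output_dir)
reloaded = BertForSequenceClassification.from_pretrained(output_dir, num_labels=5)

# Every parameter, including the classification head, should survive the round trip.
identical = all(
    torch.equal(param, reloaded.state_dict()[name])
    for name, param in model.state_dict().items()
)
print("weights identical after reload:", identical)
```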
transformers
3,429
closed
Confusion in understanding the output of BERTforTokenClassification class from Transformers library
This is the example given in the documentation of the Transformers PyTorch library: ``` from transformers import BertTokenizer, BertForTokenClassification import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForTokenClassification.from_pretrained('bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1 labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1 outputs = model(input_ids, labels=labels) loss, scores, hidden_states, attentions = outputs ``` Here hidden_states is a tuple of length 13 and contains the hidden states of the model at the output of each layer plus the initial embedding outputs. **I would like to know whether hidden_states[0] or hidden_states[12] represents the final hidden state vectors.** Thanks in advance @thomwolf @nreimers
03-25-2020 10:58:34
03-25-2020 10:58:34
AFAIK, `12` does<|||||>For detailed explanation for this, refer https://stackoverflow.com/questions/60847291/confusion-in-understanding-the-output-of-bertfortokenclassification-class-from-t
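To make the indexing concrete, a small sketch based on the snippet above, requesting hidden states without labels (the exact tuple layout may differ slightly across library versions):

```python
import torch
from transformers import BertTokenizer, BertForTokenClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained("bert-base-uncased", output_hidden_states=True)

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)
outputs = model(input_ids)
hidden_states = outputs[-1]          # tuple of 13 tensors when only hidden states are requested

embedding_output = hidden_states[0]  # initial embedding layer
final_layer = hidden_states[-1]      # same as hidden_states[12], output of the last encoder layer
print(len(hidden_states), final_layer.shape)
```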
transformers
3,428
closed
Add wmt translation example
This PR adds a translation example for T5. It uses the `sacrebleu` BLEU scorer. I adapted the README.md a bit so that users are aware that the results in the official paper were obtained with a fine-tuned T5. @craffel
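For reference, a minimal sketch of scoring a single T5 translation with `sacrebleu` (untuned `t5-small`, so the BLEU value is only illustrative):

```python
import sacrebleu
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode(
    "translate English to German: The house is wonderful.", return_tensors="pt"
)
output_ids = model.generate(input_ids, max_length=40, num_beams=4, early_stopping=True)
hypothesis = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# corpus_bleu takes the system outputs and one list of strings per reference set.
references = [["Das Haus ist wunderbar."]]
print(hypothesis, sacrebleu.corpus_bleu([hypothesis], references).score)
```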
03-25-2020 10:47:35
03-25-2020 10:47:35
> long Yeah we should definitely try to run it on a GPU - will take a look at that :-) <|||||>Not sure whether we need fp16 and multi-gpu training. I think single GPU training is enough and t5 + wmt does not take much memory. But happy to take a look into it if you guys think it's worth it :-) @thomwolf @LysandreJik @julien-c <|||||>Code quality test fails because of unpinned isort library (see https://github.com/huggingface/transformers/pull/3449)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=h1) Report > Merging [#3428](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b4fb94fe6d831b17c0df364b2848c80ef3add154?src=pr&el=desc) will **not change** coverage. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3428/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3428 +/- ## ======================================= Coverage 52.51% 52.51% ======================================= Files 100 100 Lines 17051 17051 ======================================= Hits 8954 8954 Misses 8097 8097 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=footer). Last update [b4fb94f...713524e](https://codecov.io/gh/huggingface/transformers/pull/3428?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,427
closed
I want to create a tokenizer which the vocab file in my computer
I want to create a tokenizer from a local vocab file on my computer; my code is below: ``` self.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/bert_config.json') self.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/vocab.txt') ``` It raises the error below. What's wrong? My environment is Linux, Python 3.7, transformers 2.6. ``` OSError: Couldn't reach server at '/home/cally/Awake/Code/bert/pre_model/vocab.txt' to download configuration file or configuration file is not a valid JSON file. Please check network or file content here: /home/cally/Awake/Code/bert/pre_model/vocab.txt. ```
03-25-2020 10:32:48
03-25-2020 10:32:48
Your config.json file should be named `config.json`, and then you should be able to do: ``` self.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/') self.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/') ``` (just the folder name). You might have to add `model_type: "bert"` to your config.json.<|||||>> > > Your config.json file should be named `config.json`, and then you should be able to do: > > ``` > self.model_config = AutoConfig.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/') > self.tokenizer = AutoTokenizer.from_pretrained(r'/home/cally/Awake/Code/bert/pre_model/') > ``` > > (just the folder name). > > You might have to add `model_type: "bert"` to your config.json. I encountered the same problem and sorry to bother, but where should I add this line?
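To answer the follow-up question above: `model_type` goes inside the `config.json` file itself, as a top-level key. A hedged sketch of patching it programmatically (the folder path is the one from the issue and is just a placeholder):

```python
import json
import os

model_dir = "/home/cally/Awake/Code/bert/pre_model"  # folder holding config.json and vocab.txt

config_path = os.path.join(model_dir, "config.json")  # note: the file must be named config.json
with open(config_path) as f:
    config = json.load(f)

# Add the key AutoConfig/AutoTokenizer use to infer the architecture, if it is missing
config.setdefault("model_type", "bert")

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```

After that, `AutoConfig.from_pretrained(model_dir)` and `AutoTokenizer.from_pretrained(model_dir)` should resolve the local files as described in the reply above.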
transformers
3,426
closed
Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_selec
I wrapped the code base into a flask container and tried to run it on a GPU and I am running into a GPU device copy issue. Looking for pointers. I have verified that the model actually gets moved to the GPU: ` self.model.to(torch.device("cuda" if torch.cuda.is_available() and not "store_true" else "cpu")) self.model.eval() # TO HERE` ``` backend_1 | File "/app/run_generation.py", line 285, in hook backend_1 | num_return_sequences=args.num_return_sequences, backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad backend_1 | return func(*args, **kwargs) backend_1 | File "/app/transformers/modeling_utils.py", line 979, in generate backend_1 | attention_mask=attention_mask, backend_1 | File "/app/transformers/modeling_utils.py", line 1016, in _generate_no_beam_search backend_1 | outputs = self(**model_inputs) backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ backend_1 | result = self.forward(*input, **kwargs) backend_1 | File "/app/transformers/modeling_gpt2.py", line 599, in forward backend_1 | inputs_embeds=inputs_embeds, backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ backend_1 | result = self.forward(*input, **kwargs) backend_1 | File "/app/transformers/modeling_gpt2.py", line 465, in forward backend_1 | inputs_embeds = self.wte(input_ids) backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__ backend_1 | result = self.forward(*input, **kwargs) backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 114, in forward backend_1 | self.norm_type, self.scale_grad_by_freq, self.sparse) backend_1 | File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1484, in embedding backend_1 | return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) backend_1 | RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_selec ```
03-25-2020 05:19:36
03-25-2020 05:19:36
Are the inputs also being cast to the GPU? See the sketch below for what that looks like.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
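A minimal sketch of the fix implied by that question: the encoded inputs have to be moved to the same device as the model before calling `generate` (model name and prompt are illustrative only):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
model.eval()

# The input tensor must live on the same device as the model's embedding weights
input_ids = tokenizer.encode("Some prompt text", return_tensors="pt").to(device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```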
transformers
3,425
closed
Update model card huseinzol05/bert-base-bahasa-cased
03-25-2020 04:21:07
03-25-2020 04:21:07
Hi @huseinzol05! can you rebase on master so that it's easy to merge?<|||||>@julien-c , done! thank u very much!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=h1) Report > Merging [#3425](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c683ef01e19c4dc1216dcd1ae3c8e7c44d7b2b9&el=desc) will **increase** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3425/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3425 +/- ## ======================================= Coverage 77.76% 77.76% ======================================= Files 100 100 Lines 16995 16995 ======================================= + Hits 13216 13217 +1 + Misses 3779 3778 -1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3425/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=footer). Last update [9c683ef...33d0e10](https://codecov.io/gh/huggingface/transformers/pull/3425?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c , added xlnet-base README
transformers
3,424
closed
Where is the code for Bart fine-tuning? Thanks
03-25-2020 01:54:34
03-25-2020 01:54:34
Hi, the code is in `transformers/examples/summarization/bart/`. Read the `README.md` file. > To use your own data, copy that file's format. Each article to be summarized is on its own line. Or look at issue #3672
transformers
3,423
closed
Experiment w/ dataclasses (including Py36)
03-25-2020 00:14:53
03-25-2020 00:14:53
transformers
3,422
closed
[BART] add bart-large-xsum weights
- The conversion script can now take a path, which is required since this model is not on `torch.hub`. Finetuning with fairseq and then converting to huggingface should work. I also cleaned it up a bit. - The config in S3 is already updated with the author-recommended generation parameters: `num_beams=6, length_penalty=1.0, min_length=11, max_length=62`. Context: These weights are from bart finetuned on the XSum abstractive summarization challenge, which encourages shorter (more abstractive) summaries. It achieves state of the art. Discussion: - I propose changing the SummarizationPipeline default to this model in a separate PR, since the summaries are shorter (and high quality)!
03-24-2020 22:32:45
03-24-2020 22:32:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=h1) Report > Merging [#3422](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/17dceae7a1de5577cd0c07a97dcd5821a08af07c&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3422/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3422 +/- ## ========================================== - Coverage 77.80% 77.79% -0.01% ========================================== Files 100 100 Lines 17051 17051 ========================================== - Hits 13266 13265 -1 - Misses 3785 3786 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `100.00% <ø> (ø)` | | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.58% <ø> (ø)` | | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=footer). Last update [17dceae...71fcbc9](https://codecov.io/gh/huggingface/transformers/pull/3422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
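For context, a hedged usage sketch of the checkpoint described in this PR, passing the author-recommended generation parameters explicitly; the hub identifier `facebook/bart-large-xsum` and the input sentence are assumptions for illustration:

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-large-xsum"  # assumed hub identifier for these weights
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

article = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([article], max_length=1024, return_tensors="pt")

# Mirrors the generation defaults stored in the S3 config for this checkpoint
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=6,
    length_penalty=1.0,
    min_length=11,
    max_length=62,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```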
transformers
3,421
closed
Added BioBERT-NLI model card
03-24-2020 22:22:01
03-24-2020 22:22:01
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=h1) Report > Merging [#3421](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d0c36a7b7270f114322c191866d29abea383e5da&el=desc) will **increase** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3421/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3421 +/- ## ========================================== + Coverage 77.55% 77.56% +0.01% ========================================== Files 100 100 Lines 16970 16970 ========================================== + Hits 13161 13163 +2 + Misses 3809 3807 -2 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3421/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.85% <0.00%> (+0.27%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=footer). Last update [d0c36a7...e17e0d2](https://codecov.io/gh/huggingface/transformers/pull/3421?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,420
closed
Reading files takes forever in language modeling
# 🐛 Bug ## Information Model I am using (GPT2): Language I am using the model on (Music Midifiles tokens): The problem arises when using: * [ ] the official example scripts: (give details below) The tasks I am working on is: * language modeling, running the script run_langauge_modling.py ## To reproduce Steps to reproduce the behavior: ``` python -m torch.distributed.launch \ --nproc_per_node 4 run_language_modeling.py \ --train_data_file /nethome/abashir/data/train.txt \ --output_dir /data/users/abashir/model \ --model_type gpt2 --tokenizer_name /nethome/abashir/data/PianoAI \ --do_train --line_by_line --learning_rate 1e-4 --num_train_epochs 5 \ --save_total_limit 2 --save_steps 1000 --per_gpu_train_batch_size 8 \ --seed 42 --overwrite_cache --block_size 128 ``` The output freeze at this stage for more than a day. train file size is less than 1 GB: `03/23/2020 19:43:11 - INFO - __main__ - Creating features from dataset file at /nethome/abashir/data/train.txt` ## Expected behavior Start the training right away after adding --line_by_line ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Linux-4.4.0-45-generic-x86_64-with-debian-jessie-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes
03-24-2020 21:04:01
03-24-2020 21:04:01
the same issue happens with me when running on 30 GB English Text, with the same parameters given to the script<|||||>@julien-c any recommendations on this issue?<|||||>Maybe launch on a single node and in a debugger, to see what's happening?<|||||>I have run the debugging both with pre-trained tokenizer using --model_name_or_path flag and with a trained tokenizer and config In both cases, this line is where the conde hangs `self.examples = tokenizer.batch_encode_plus(lines, add_special_tokens=True, max_length=block_size)["input_ids"]` <|||||>One suggestion is that you could create the cached dataset file once locally and then copy it over to wherever (cluster etc..) you're gonna be using it to train.<|||||>I have the same issue. I was training XML-Roberta and the training gets stuck in the step of creating features. Does anyone have a solution? Thanks!<|||||>@Genius1237 the cluster that I am using have better specs than my local machine. but also, I have tried this before and ended up in the same problem<|||||>@abdallah197 Can you do something like this (https://github.com/Microsoft/ptvsd/issues/1354#issuecomment-487289774). It basically allows you to debug installed packages. This way, you can debug into the `batch_encode_plus` function and see if it's failing for one particular example or just slow in general.<|||||>Visual Studio Code also has a pretty neat and easy-to-use debugger that you can even run on a remote machine. Let us know if you find root causes for your issue.<|||||>I am facing the same issue, may I ask if anyone got a solution? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
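One way to act on the "build the cache once, then copy it" suggestion above is to pre-tokenize the file yourself and save the result before moving to the cluster. This is only a rough sketch; the chunk size, file names, and pickle format are assumptions rather than what `run_language_modeling.py` does internally, but it makes progress visible and shows whether `batch_encode_plus` is merely slow or actually stuck:

```python
import pickle
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder: use your own tokenizer path
block_size = 128

examples, batch = [], []
with open("train.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            batch.append(line)
        if len(batch) == 1000:  # encode in chunks so progress can be logged
            encoded = tokenizer.batch_encode_plus(batch, add_special_tokens=True, max_length=block_size)
            examples.extend(encoded["input_ids"])
            batch = []
            print(f"{len(examples)} lines tokenized", flush=True)
    if batch:
        encoded = tokenizer.batch_encode_plus(batch, add_special_tokens=True, max_length=block_size)
        examples.extend(encoded["input_ids"])

with open("train_cache.pkl", "wb") as f:
    pickle.dump(examples, f)
```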
transformers
3,419
closed
Adds translation pipeline
The API for translation is as follows: ``` en_fr_translation = pipeline("translation_en_to_fr") en_fr_translation("How old are you?") ``` for English to French translation. PR adds tests and gives an example in the docstring. PR builds on #3413 and should be merged after this one. Example: ![Screenshot from 2020-03-26 11-10-52](https://user-images.githubusercontent.com/23423619/77637172-899ea400-6f55-11ea-83a6-0a476fd33430.png)
03-24-2020 20:44:25
03-24-2020 20:44:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=h1) Report > Merging [#3419](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9c683ef01e19c4dc1216dcd1ae3c8e7c44d7b2b9&el=desc) will **increase** coverage by `0.03%`. > The diff coverage is `94.44%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3419/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3419 +/- ## ========================================== + Coverage 77.76% 77.80% +0.03% ========================================== Files 100 100 Lines 16995 17025 +30 ========================================== + Hits 13216 13246 +30 Misses 3779 3779 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.92% <ø> (ø)` | | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `74.78% <94.44%> (+1.51%)` | :arrow_up: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3419/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.86% <0.00%> (+0.13%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=footer). Last update [9c683ef...a5160a6](https://codecov.io/gh/huggingface/transformers/pull/3419?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,418
closed
Unused function in squad metrics
In the squad metrics file, I noticed an unused function `find_all_best_thresh_v2` https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py#L167 It looks to be pertaining to the squad v2.0 dataset type, which has 'impossible' questions for contexts. It looks like it was meant to be used something like ``` if no_answer_probs: if version_2_with_negative: find_all_best_thresh_v2(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer) else: find_all_best_thresh(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer) ``` here https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py#L236
03-24-2020 19:56:42
03-24-2020 19:56:42
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,417
closed
Fix XLNet batch generation bug
When doing batch generation with XLNet, only the first element in the batch contains any predictions. This seems to be caused by the target_mapping being improperly initialized in prepare_inputs_for_generation.
03-24-2020 19:41:54
03-24-2020 19:41:54
Hi @neonbjb, sorry to answer waaaay to late. Batch generation currently does not work. See #3021 We are not sure yet when and how to add this feature.<|||||>Uhhhh.. but it could with this PR? I have it working on my cloned repo and used it in this writeup. Your call though. https://nonint.com/2020/03/27/fine-tuning-xlnet-for-generation-tasks/
transformers
3,416
closed
XLNet model on S3 not set up correctly?
Hi all, **I'm trying to initialize an XLNetForSequenceClassification model as follows:** ``` from transformers import XLNetForSequenceClassification, XLNetConfig configuration = XLNetConfig() model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", config=configuration) ``` **However I'm getting an error:** > RuntimeError: Error(s) in loading state_dict for XLNetForSequenceClassification: > size mismatch for transformer.mask_emb: copying a param with shape torch.Size([1, 1, 768]) from checkpoint, the shape in current model is torch.Size([1, 1, 1024]). > size mismatch for transformer.word_embedding.weight: copying a param with shape torch.Size([32000, 768]) from checkpoint, the shape in current model is torch.Size([32000, 1024]). > size mismatch for transformer.layer.0.rel_attn.q: copying a param with shape torch.Size([768, 12, 64]) from checkpoint, the shape in current model is torch.Size([1024, 16, 64]). > ... **I can get my code to run by specifying in my config:** ``` configuration.d_model = 768 # hidden size --> should be 1024 configuration.n_head = 12 # number of attention heads --> should be 16 configuration.d_inner = 3072 # FFN inner hidden size --> should be 4096 ``` But I am somewhat confused: These adjustments to the model are not in line with XLNet as introduced in the [XLNet paper](https://arxiv.org/pdf/1906.08237.pdf) (page 13). Am I understanding something wrong here, or is the XLNet model on S3 not set up correctly?
03-24-2020 19:28:52
03-24-2020 19:28:52
After looking into this some more. It looks like my issue was arising from the fact that the XLNet authors published both a 'base' and 'large' version of their model: ![image](https://user-images.githubusercontent.com/61122332/77469528-84cfd800-6e0f-11ea-8995-8a569bd2dc16.png) It seems like the config defaults to the 'large' version, but I am loading the 'base' version. Adjustments to the parameters as outlined make sense in this light. <|||||>There are simpler ways to do what you describe; first of all, you don't have to specify the configuration file, the model will load it automatically: ```py from transformers import XLNetForSequenceClassification, XLNetConfig model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased") ``` Secondly, you can also instantiate a configuration from a pre-trained checkpoint: ```py from transformers import XLNetForSequenceClassification, XLNetConfig configuration = XLNetConfig.from_pretrained("xlnet-base-cased") model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", config=configuration) ```
transformers
3,415
closed
Problem with running Transformer Notebook: How to train a language model
Hello, in the page: https://github.com/huggingface/transformers/blob/master/notebooks/README.md I clicked "open in colab" on the notebook "How to train a language model". When the last run cell was run: %%time !{cmd} The following error was presented: Traceback (most recent call last): File "run_language_modeling.py", line 40, in <module> from transformers import ( ImportError: cannot import name 'CONFIG_MAPPING' CPU times: user 42.2 ms, sys: 18 ms, total: 60.2 ms Wall time: 5.82 s Any suggestion? Thank you!
03-24-2020 17:08:19
03-24-2020 17:08:19
Importing CONFIG_MAPPING from https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_auto.py to https://github.com/huggingface/transformers/blob/master/src/transformers/__init__.py fixes the problem (for the last version of the package)<|||||>This should be fixed on master by f8823bad9a23f6623e91e71719e65342de877cb9. Can you please try again, and re-open if necessary? (in a Colab notebook, you'll need to re-download the `run_language_modeling.py` script using `!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_language_modeling.py`)<|||||>It is now showing the following error, Traceback (most recent call last): File "run_language_modeling.py", line 782, in <module> main() File "run_language_modeling.py", line 677, in main config = AutoConfig.from_pretrained(args.config_name, cache_dir=args.cache_dir) File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py", line 198, in from_pretrained "in its name: {}".format(pretrained_model_name_or_path, ", ".join(CONFIG_MAPPING.keys())) ValueError: Unrecognized model in /content/models/RoBERTa_GPT/. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: t5, distilbert, albert, camembert, xlm-roberta, bart, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl CPU times: user 41.1 ms, sys: 8.81 ms, total: 49.9 ms Wall time: 6.8 s<|||||>@Yamantaka01 Your config.json in `/content/models/RoBERTa_GPT/config.json` should contain a model_type key
transformers
3,414
closed
Add custom rules for sampling from GPT-2 Generator
Hi @patrickvonplaten, I've read your [blogpost](https://huggingface.co/blog/how-to-generate) and it's really interesting. Thanks! I have a question about it. We have recently trained a GPT-2 generator with HF on general text and it works well. But to get better results on my custom task, I would like to add a custom sampling function that maximizes a certain property of the generated text. Concretely, I would like to apply custom sampling to the top-k words chosen by GPT-2 in order to maximize, for example, the number of vowels in the generated text. Could you help me think about this solution? Thanks
03-24-2020 15:41:24
03-24-2020 15:41:24
Hey @simonefrancia happy that you liked the blog post :-) This sounds like quite a special sampling function, so the best you can do would be to fork/clone the repo and add this functionality yourself. If the sampling function is too special we probably will not include it into the master branch. But feel free to open a PR if you think it adds value and is quite general.
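For anyone landing here later, a rough, hedged sketch of what such a custom rule could look like: a hand-rolled generation loop that re-weights the top-k candidate tokens by how many vowels they contain before sampling. This is not an existing library API, just one possible implementation of the idea discussed above:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def vowel_biased_sample(logits, k=50, bias=0.5):
    """Sample from the top-k tokens, boosting tokens that contain more vowels."""
    top_logits, top_indices = torch.topk(logits, k)
    vowel_counts = torch.tensor(
        [sum(c in "aeiouAEIOU" for c in tokenizer.decode([i])) for i in top_indices.tolist()],
        dtype=top_logits.dtype,
    )
    probs = torch.softmax(top_logits + bias * vowel_counts, dim=-1)
    return top_indices[torch.multinomial(probs, num_samples=1)]


input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids)[0][0, -1, :]  # next-token logits for the last position
        next_token = vowel_biased_sample(logits)
        input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```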
transformers
3,413
closed
Add t5 to pipeline(task='summarization')
This PR: - adds T5 to summarization pipelines - adds warnings and better defaults to Bart/T5 summarization - removes an unnecessary assert in the generate() function
03-24-2020 14:26:20
03-24-2020 14:26:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=h1) Report > Merging [#3413](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **increase** coverage by `0.02%`. > The diff coverage is `93.75%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3413/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3413 +/- ## ========================================== + Coverage 77.56% 77.58% +0.02% ========================================== Files 100 100 Lines 16970 16993 +23 ========================================== + Hits 13162 13184 +22 - Misses 3808 3809 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.71% <ø> (-0.02%)` | :arrow_down: | | [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `73.05% <93.10%> (+0.52%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.44% <100.00%> (+0.52%)` | :arrow_up: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.89% <100.00%> (+0.05%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=footer). Last update [e392ba6...23778d1](https://codecov.io/gh/huggingface/transformers/pull/3413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
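A hedged usage sketch of the feature this PR adds, requesting T5 explicitly as the summarization pipeline model (the checkpoint choice and generation lengths are arbitrary):

```python
from transformers import pipeline

# The pipeline default stays BART; T5 can be selected explicitly
summarizer = pipeline("summarization", model="t5-small", tokenizer="t5-small")

text = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris."
)
print(summarizer(text, max_length=40, min_length=10))
```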
transformers
3,412
closed
cannot import name 'MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING'
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): BERT Language I am using the model on (English, Chinese ...): ENglish The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Run the run_ner.py script on examples/ner/ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
03-24-2020 14:07:12
03-24-2020 14:07:12
This should be fixed on master since a8e3336a850e856188350a93e67d77c07c85b8af. Feel free to re-open if that's not the case.<|||||>You might want to upgrade your repo. `pip install --upgrade .`
transformers
3,411
closed
Add t5 summarization example
Adds a TF 2.0 example for T5 summarization. Adds a dataset download file via `tensorflow_datasets` and a ROUGE scorer. The example is currently being tested with T5-large on GPU to see how the ROUGE scorer performs in comparison to the `examples/summarization/bart` ROUGE scorer.
03-24-2020 11:53:46
03-24-2020 11:53:46
> pending my comments Very much down to share the summarization code in another PR!<|||||>Code quality test fails because of unpinned isort library (see https://github.com/huggingface/transformers/pull/3449)
transformers
3,410
closed
Added precisions in SciBERT-NLI model card
Sorry that I have to do a second PR for this model card, but I forgot to include some precisions about the training process that are undoubtedly useful in order to reproduce my results! - Added training time and training hardware - Added lowercasing and Max. Seq. Length to parameters table
03-24-2020 11:46:42
03-24-2020 11:46:42
transformers
3,409
closed
Add right model and tokenizer path in example
03-24-2020 11:03:31
03-24-2020 11:03:31
transformers
3,408
closed
[model_cards] 🇹🇷 Add new BERTurk models
Hi, this PR adds three new BERT models for Turkish: * `dbmdz/bert-base-turkish-uncased` - uncased model with a vocab size of 32k * `dbmdz/bert-base-turkish-128k-cased` - cased model with a vocab size of 128k * `dbmdz/bert-base-turkish-128k-uncased` - uncased model with a vocab size of 128k Models (incl. `tokenizer_config.json`) are already uploaded to the model hub :) Results are coming soon in the [BERTurk repository](https://github.com/stefan-it/turkish-bert)!
03-24-2020 10:22:58
03-24-2020 10:22:58
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=h1) Report > Merging [#3408](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e392ba6938f50655a195ea7ec8a260b1e9fc6058&el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3408/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3408 +/- ## ========================================== - Coverage 77.56% 77.55% -0.01% ========================================== Files 100 100 Lines 16970 16970 ========================================== - Hits 13162 13161 -1 - Misses 3808 3809 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3408/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.58% <0.00%> (-0.14%)` | :arrow_down: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=footer). Last update [e392ba6...756792d](https://codecov.io/gh/huggingface/transformers/pull/3408?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
3,407
closed
AdamW in HuggingFace is different from AdamW in Pytorch
# ❓ Question I just noticed that the implementation of AdamW in HuggingFace is different from PyTorch. The previous AdamW first updates the gradient then apply the weight decay. However, in the paper (Decoupled Weight Decay Regularization, link: https://arxiv.org/abs/1711.05101) and the implementation of Pytorch, the AdamW first apply the weight decay then update the gradient. I was wondering if the two approaches are the same. Thanks! (In my opinion, they are not the same procedure.) HuggingFace: ```python for group in self.param_groups: for p in group["params"]: ... # Decay the first and second moment running average coefficient # In-place operations to update the averages at the same time exp_avg.mul_(beta1).add_(1.0 - beta1, grad) exp_avg_sq.mul_(beta2).addcmul_(1.0 - beta2, grad, grad) denom = exp_avg_sq.sqrt().add_(group["eps"]) step_size = group["lr"] if group["correct_bias"]: # No bias correction for Bert bias_correction1 = 1.0 - beta1 ** state["step"] bias_correction2 = 1.0 - beta2 ** state["step"] step_size = step_size * math.sqrt(bias_correction2) / bias_correction1 p.data.addcdiv_(-step_size, exp_avg, denom) # Just adding the square of the weights to the loss function is *not* # the correct way of using L2 regularization/weight decay with Adam, # since that will interact with the m and v parameters in strange ways. # # Instead we want to decay the weights in a manner that doesn't interact # with the m/v parameters. This is equivalent to adding the square # of the weights to the loss with plain (non-momentum) SGD. # Add weight decay at the end (fixed version) if group["weight_decay"] > 0.0: p.data.add_(-group["lr"] * group["weight_decay"], p.data) ``` Pytorch: ```python for group in self.param_groups: for p in group['params']: ... # Perform stepweight decay p.data.mul_(1 - group['lr'] * group['weight_decay']) exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] if amsgrad: max_exp_avg_sq = state['max_exp_avg_sq'] beta1, beta2 = group['betas'] state['step'] += 1 bias_correction1 = 1 - beta1 ** state['step'] bias_correction2 = 1 - beta2 ** state['step'] # Decay the first and second moment running average coefficient exp_avg.mul_(beta1).add_(1 - beta1, grad) exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) if amsgrad: # Maintains the maximum of all 2nd moment running avg. till now torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq) # Use the max. for normalizing running avg. of gradient denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps']) else: denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps']) step_size = group['lr'] / bias_correction1 p.data.addcdiv_(-step_size, exp_avg, denom) ```
03-24-2020 08:32:20
03-24-2020 08:32:20
@songsuoyuan , if you notice the line: p.data.mul_(1 - group['lr'] * group['weight_decay']) The multiplication factor is (1 - group['lr'] * group['weight_decay']) . All subsequent first and second order moment calculations are not using p.data anymore. This means we would get the same result if we had skipped that multiplication and introduce an addition operation at the end if weight decay > 0 with p.data._mul(-group['lr'] * group['weight_decay']) and this what has been done in the hugginFace implementation as well. So essentially both are same. Also in the paper, in-fact the weight decay term is introduced at end ( line-12 :Algorithm-2). Decay term in line-6 corresponds to L2 regularization which is not used here. Therefore it looks to me both the implementation are the same and reflect what {ilya,fh}@ proposed in the paper.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>I find this question too. Two codes are obviously different. Because the `p.data` in huggingface has changed through `p.data.addcdiv_(-step_size, exp_avg, denom)` But I can't understand why.<|||||>*bump*<|||||>Update: they are indeed the same. PyTorch's implementation is just too confusing to understand.<|||||>They are not equivalent. This should be reported as a bug, but I see the huggingface AdamW has been deprecated.<|||||>bump again. I see old code from researcher on github use AdamW with huggingface scheduler ``` from pytorch_transformers import AdamW, WarmupLinearSchedule ``` Should I replace AdamW of huggingface to AdamW of pytorch ? ``` from torch.optim import AdamW from pytorch_transformers import WarmupLinearSchedule ``` Any advise ?
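One way to settle the debate above empirically is to run a few identical optimisation steps with both implementations and compare the resulting parameters. A rough sketch, assuming a transformers version that still ships its own `AdamW` (the toy data, hyperparameters, and shared `eps` are arbitrary choices):

```python
import torch
from torch.optim import AdamW as TorchAdamW
from transformers import AdamW as HFAdamW

torch.manual_seed(0)
p_hf = torch.nn.Parameter(torch.randn(5))
p_pt = torch.nn.Parameter(p_hf.detach().clone())

opt_hf = HFAdamW([p_hf], lr=1e-1, eps=1e-8, weight_decay=0.1)   # correct_bias=True by default
opt_pt = TorchAdamW([p_pt], lr=1e-1, eps=1e-8, weight_decay=0.1)

for _ in range(3):
    grad = torch.randn(5)
    p_hf.grad = grad.clone()
    p_pt.grad = grad.clone()
    opt_hf.step()
    opt_pt.step()

print(p_hf.data)
print(p_pt.data)
print("max abs difference:", (p_hf.data - p_pt.data).abs().max().item())
```

With `eps` set equal, any remaining difference should stem only from where the decay term is applied: to the pre-update weights in PyTorch, and to the post-update weights in the transformers version, as the code excerpts in the issue body show.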
transformers
3,406
closed
Model cards for CS224n SQuAD2.0 models
For the following models: * elgeish/cs224n-squad2.0-albert-base-v2 * elgeish/cs224n-squad2.0-albert-large-v2 * elgeish/cs224n-squad2.0-albert-xxlarge-v1 * elgeish/cs224n-squad2.0-distilbert-base-uncased * elgeish/cs224n-squad2.0-roberta-base
03-24-2020 07:41:35
03-24-2020 07:41:35
transformers
3,405
closed
Glue test processors and predictions
Address #3176 - Adds a function to load the test dataset for each GLUE task processor. - Updates the `run_glue.py` example script to add a `--do_test` flag for producing test set predictions in a `.tsv` file, submittable to the [GLUE leaderboard](https://gluebenchmark.com/). - Adds a couple of extra feature flags to `run_glue.py` that don't need to stay.
03-24-2020 06:13:30
03-24-2020 06:13:30
hm my local isort passes even on a clean env ``` (transformers) shoarora@sho-5:~/transformers ‹glue-test-processors› $ make style black --line-length 119 --target-version py35 examples templates tests src utils All done! ✨ 🍰 ✨ 243 files left unchanged. isort --recursive examples templates tests src utils (transformers) shoarora@sho-5:~/transformers ‹glue-test-processors› $ which isort /home/shoarora/miniconda3/envs/transformers/bin/isort ``` Ultimately, I ran it in the `circleci/python:3.6` docker image to get the correct formatting. This disagrees with what happens when I style locally in a clean env. <|||||>Hi @shoarora, this is a good addition but in the meantime we updated the run_glue script (and associated utilities) quite a bit in #3800. Would you like to take a stab at updating this (probably opening a new PR)? The `Trainer`'s predict method accepts non-labelled datasets now so it should be pretty straightforward to hook it. Let us know, otherwise we'll do it down the line.<|||||>Would love to see this updated and merged :)<|||||>@ZhaofengWu You can check out https://github.com/huggingface/transformers/pull/4463 which we are going to take a look at soon<|||||>Thanks!<|||||>Closed by #4463
transformers
3,404
closed
[Bart]example---BartForConditionalGeneration
**when I run your example:** from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig model = BartForConditionalGeneration.from_pretrained('bart-large-cnn') tokenizer = BartTokenizer.from_pretrained('bart-large-cnn') ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer.batch_encode_plus([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') print(inputs) summary_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]) **model :** https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/pytorch_model.bin **the results:** {'input_ids': tensor([[ 0, 1308, 964, 32, 3035, 53, 51, 3529, 350, 171, 33237, 4, 2]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])} Traceback (most recent call last): File "/home/qwh/桌面/OpenNMT/bart.py", line 17, in <module> summary_ids = model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], num_beams=4, max_length=5) File "/home/qwh/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad return func(*args, **kwargs) **TypeError: generate() got an unexpected keyword argument 'attention_mask'** thank you!
03-24-2020 03:04:38
03-24-2020 03:04:38
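The TypeError in this record usually means the installed `transformers` predates `attention_mask` support in `generate()`; after upgrading, the documented example should run as-is. A hedged re-run sketch (using the namespaced `facebook/bart-large-cnn` identifier and an illustrative `max_length`):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

article = "My friends are cool but they eat too many carbs."
inputs = tokenizer.batch_encode_plus([article], max_length=1024, return_tensors="pt")

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,
    max_length=20,
    early_stopping=True,
)
print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids])
```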
transformers
3,403
closed
[examples] Use AutoModels in more examples
still to-do (non-exhaustive list): - [ ] run_multiple_choice - [ ] run_xnli - [ ] test_hans - [ ] run_mmimdb - [ ] (maybe) run_generation
03-23-2020 23:47:37
03-23-2020 23:47:37
transformers
3,402
closed
[WIP] seq2seq example
This PR presents an example seq2seq use case and bug fixes necessary for this to execute with reasonable accuracy. The utils_seq2seq.py file defines the data format for training data, and the run_seq2seq.py file takes training, development, and test data and produces a model. The README.md discusses how to execute this toy problem. The specific toy problem in use here is formatting a date string to the American style, which is a trivial example. On my local setup using GPUs, this example executes within 5 minutes. Production models should include more learnings. I welcome feedback about how to strengthen performance here and the best route to increase testing. This relies on a few bug fixes which have been incorporated in this branch - Without a fix for #3038, PreTrainedEncoderDecoder won't instantiate at all. - Without a fix for #2435, BERT models fail completely on this use case as the BERT decoder isn't instantiated correctly without CrossAttention in that case. - ~I strongly suspect that the input to the decoder in the PreTrainedEncoderDecoder class is incorrect as present in the code base, and commit https://github.com/huggingface/transformers/commit/9fcf73afbcfa18918234592039da7bd409820431 has a proposed fix. It doesn't make sense to have the expected token ids as input to the decoder when the decoder needs to learn how to decode from the embeddings.~ Incomplete understanding - will fix
03-23-2020 22:24:08
03-23-2020 22:24:08
As a non-blocking question, I do note that a lot of the examples use argparse to parse comparatively long lists of arguments. I've maintained the extant style in this PR to avoid causing noise and confusion Would it be acceptable if I broke with this style to use a JSON file to store all the arguments for an experiment?<|||||>Hi @mgoldey, sorry for only responding now. Thanks a lot for adding a seq2seq example :-) I will take a look early next week and maybe we can have a quick chat how to merge this PR and https://github.com/huggingface/transformers/pull/3383. <|||||>That sounds good. I'm still tweaking things on my end for accuracy and improved logic as I get more familiar with the code base here. I'll see if I can rebase of #3383 by then, depending on my other workload. Feel free to reach out via google hangouts if you're comfortable.<|||||>Sorry, to answer only now! I'll will soon add a Encoder-Decoder google colab that shows how to use seq2seq <|||||>Thanks - fine to close. We've moved forward without using seq2seq due to poor overall accuracy with the scale of data in place.
transformers
3,401
closed
added_tokens.json is used for splitting texts
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Bert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Run this script to save a pre-trained vocab, add some vocab using `added_tokens.json`. Then create a new tokenizer with the combined vocabs and use it to tokenize a sentence. ``` import os from transformers import BertTokenizer model_name = 'bert-base-uncased' tokenizer_path = 'tmp' if not os.path.exists(tokenizer_path): os.makedirs(tokenizer_path) tokenizer = BertTokenizer.from_pretrained(model_name) tokenizer.save_vocabulary(tokenizer_path) with open(tokenizer_path + '/added_tokens.json', 'w') as f: f.write('{"ver": 30522, "rw": 30523}') tokenizer = BertTokenizer.from_pretrained(tokenizer_path) s = "i want to overwrite ubuntu with windows" a = tokenizer.tokenize(s) print(a) ``` Output run 1: ``` ['i', 'want', 'to', 'o', '##ve', 'rw', 'rite', 'u', '##bu', '##nt', '##u', 'with', 'windows'] ``` Ouptut run 2: ``` ['i', 'want', 'to', 'o', 'ver', 'write', 'u', '##bu', '##nt', '##u', 'with', 'windows'] ``` Cause of the problem: `added_tokens.json` is [merged](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L668) with `all_special_tokens`, and then used to [split](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L815) the input text. Since the merged tokens is stored in an unordered set, the splitting process is non-deterministic for each run. For example, these are how the text is split in different runs: Split Run 1 (use `rw`): ``` ['i want to ove', 'rw', 'rite ubuntu with windows'] ``` Split Run 2 (use `ver`): ``` ['i want to o', 'ver', 'write ubuntu with windows'] ``` Possible solution: Instead of `self.unique_added_tokens_encoder`, use `set(self.all_special_tokens)` to [split the text](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L814) ## Expected behavior Using `added_tokens.json` to split the text seems to be a bug to me. I expect only a small set of special tokens in the `special_tokens_map.json` should be used for this purpose. In general it is helpful to let a tokenizer behave deterministically across multiple runs. Otherwise it will be bad for certain downstream task such as [sentence embedding ](https://github.com/UKPLab/sentence-transformers) because one sentence can be encoded in many different ways. This is in particular problematic if the number of added_vocab is big. ## Environment info wget https://download.pytorch.org/whl/cu100/torch-1.4.0%2Bcu100-cp36-cp36m-linux_x86_64.whl wget https://files.pythonhosted.org/packages/7e/90/6141bf41f5655c78e24f40f710fdd4f8a8aff6c8b7c6f0328240f649bdbe/torchvision-0.5.0-cp36-cp36m-manylinux1_x86_64.whl virtualenv -p /usr/bin/python3.6 venv && . venv/bin/activate && find . -maxdepth 1 -name "*.whl" | xargs pip install && pip install -r requirements.txt requirements.txt: transformers==2.5.1 tensorboardX==2.0 scikit-learn==0.22.2 - `transformers` version: 2.5.1 - Platform: Ubuntu - Python version: 3.6.9 - PyTorch version (GPU?): Y - Tensorflow version (GPU?): N - Using GPU in script?: Y - Using distributed or parallel set-up in script?: N
03-23-2020 20:47:55
03-23-2020 20:47:55
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,400
closed
[Bart: example] drop columns that are exclusively pad_token_id from input_ids
reasoning: These columns slow down computation, but do not change output. impact: this reduces the runtime to compute EVAL on the CNN examples from 2h to 1:37 before any other changes. I'm open to putting this as a method on `PretrainedTokenizer` if others find it useful. @joeddav you might find this useful. ### Code for easy copy paste ```python def trim_batch( input_ids, pad_token_id, attention_mask=None, ): """Remove columns that are populated exclusively by pad_token_id""" keep_column_mask = input_ids.ne(pad_token_id).any(dim=0) if attention_mask is None: return input_ids[:, keep_column_mask] else: return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask]) ```
03-23-2020 20:10:45
03-23-2020 20:10:45
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=h1) Report > Merging [#3400](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f7dcf8fcea4d486544f221032625a97ad7dc5405&el=desc) will **not change** coverage by `%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/3400/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #3400 +/- ## ======================================= Coverage 77.55% 77.55% ======================================= Files 100 100 Lines 16970 16970 ======================================= Hits 13161 13161 Misses 3809 3809 ``` ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=footer). Last update [f7dcf8f...9cfd3da](https://codecov.io/gh/huggingface/transformers/pull/3400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
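A small usage sketch for the helper above, assuming `trim_batch` from the snippet in the PR description is in scope; the token ids are made up, with BART's pad id of 1, so only the final all-pad column gets dropped:

```python
import torch

pad_token_id = 1  # BART's pad token id
input_ids = torch.tensor(
    [
        [0, 4783, 32, 1531, 2, 1, 1, 1],  # shorter example, right-padded
        [0, 4783, 32, 1531, 9, 5, 2, 1],  # longer example
    ]
)
attention_mask = input_ids.ne(pad_token_id).long()

trimmed_ids, trimmed_mask = trim_batch(input_ids, pad_token_id, attention_mask=attention_mask)
print(input_ids.shape, "->", trimmed_ids.shape)  # torch.Size([2, 8]) -> torch.Size([2, 7])
```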
transformers
3,399
closed
Trying to train a GPT2 from scratch
Hi ! I am trying to use a GPT2 architecture for musical applications and consequently need to train it from scratch. After a bit of googling I found that the issue #1714 already had "solved" the question but when I try the to run ```Python from transformers import GPT2Config, GPT2Model NUMLAYER = 4 NUMHEAD = 4 SIZEREDUCTION = 10 #the factor by which we reduce the size of the velocity argument. VELSIZE = int(np.floor(127/SIZEREDUCTION)) + 1 SEQLEN=40 #size of data sequences. EMBEDSIZE = 5 config = GPT2Config(vocab_size = VELSIZE, n_positions = SEQLEN, n_embd = EMBEDSIZE, n_layer = NUMLAYER, n_ctx = SEQLEN, n_head = NUMHEAD) model = GPT2Model(config) ``` I get the following error : ```Python Traceback (most recent call last): File "<ipython-input-7-b043a7a2425f>", line 1, in <module> runfile('C:/Users/cnelias/Desktop/PHD/Swing project/code/script/GPT2.py', wdir='C:/Users/cnelias/Desktop/PHD/Swing project/code/script') File "C:\Users\cnelias\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile execfile(filename, namespace) File "C:\Users\cnelias\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/cnelias/Desktop/PHD/Swing project/code/script/GPT2.py", line 191, in <module> model = GPT2Model(config) File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 355, in __init__ self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)]) File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 355, in <listcomp> self.h = nn.ModuleList([Block(config.n_ctx, config, scale=True) for _ in range(config.n_layer)]) File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 223, in __init__ self.attn = Attention(nx, n_ctx, config, scale) File "C:\Users\cnelias\Anaconda3\lib\site-packages\transformers\modeling_gpt2.py", line 109, in __init__ assert n_state % config.n_head == 0 ``` What does it mean and how can I solve it ? Also more generally, is there a documentation on how to do a forward call with the GPT2 ? Can I define my own ```train()``` function or do I have to use the model's build-in function ? Am I forced to use a ```Dataset``` to do the training or can I feed it individual tensors ? I looked for it but couldn't find answer to these on the doc, but maybe I missed something EDIT : Yes, I have already read the blogpost on ```huggingface.co``` but it omits too much informations and details to be usefull for my application :(
03-23-2020 15:52:26
03-23-2020 15:52:26
This [blogpost](https://huggingface.co/blog/how-to-train) might be interesting, have you seen it?<|||||>Yes, but sadly the part I am interested in, namely instantiating and training/testing from scratch with my own data, is barely described there or not at all. Do you know if it is possible to feed individual tensors to the model? And if so, how should the dimensions (batch, sequence, etc.) be ordered? I would like to write my own training function for more flexibility.<|||||>To answer my own question, everything can be found in the code by reading the docstrings: https://github.com/huggingface/transformers/blob/v2.5.1/src/transformers/modeling_gpt2.py#L99<|||||>@johncwok Did you succeed in training the GPT2 model on your own dataset from scratch?<|||||>I did, but it didn't produce very good results. Either my data is not good enough or I need more layers, but I have reached the maximum of what my computer's capacity allows.<|||||>Hi @johncwok, I plan to train GPT-2 on my data. Do you mind sharing your training script, along with the raw data and the code to preprocess it?
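Since the assertion in the traceback above (`assert n_state % config.n_head == 0`) simply means the embedding size must be divisible by the number of attention heads (5 is not divisible by 4), here is a hedged minimal sketch with a valid configuration and a hand-written training step; the hyperparameters and random batch are placeholders:

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB_SIZE = 13   # e.g. reduced velocity values plus one, as in the question
SEQ_LEN = 40
EMBED_SIZE = 64   # must be divisible by the number of heads (this is what the assert checks)
NUM_HEADS = 4
NUM_LAYERS = 4

config = GPT2Config(
    vocab_size=VOCAB_SIZE,
    n_positions=SEQ_LEN,
    n_ctx=SEQ_LEN,
    n_embd=EMBED_SIZE,
    n_layer=NUM_LAYERS,
    n_head=NUM_HEADS,
)
model = GPT2LMHeadModel(config)  # LM head variant returns a loss when labels are given
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Individual tensors are fine: a (batch, sequence) matrix of token ids
batch = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))

model.train()
loss = model(batch, labels=batch)[0]  # labels == inputs for causal language modeling
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(loss.item())
```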
transformers
3,398
closed
[Bart] Fix: put dummy_inputs on correct device
This fixes `test_dummy_inputs`, which was failing on GPU because the dummy inputs were put on the CPU even when the model was on the GPU.
03-23-2020 14:35:28
03-23-2020 14:35:28
Merging with deep suspicion that circleci failure is spurious.
transformers
3,397
closed
Supported language information by model
Hi there, Another feature/documentation request. I am evaluating the language support of all pre-trained models distributed via huggingface :) I have quickly looked into the code and happily found that XLNet and FlauBERT models have that information: https://github.com/huggingface/transformers/search?q=lang&unscoped_q=lang Do you plan in the short term to add an `available_languages` attribute to all pre-trained models? If not, happy to do that investigation and share results. Cheers, Alex
03-23-2020 14:15:19
03-23-2020 14:15:19
Hi Alex, language support is described in the metadata in the models' [model cards](https://github.com/huggingface/transformers/tree/master/model_cards) and then rendered in a searchable way on the https://huggingface.co/models website. The mapping is not exhaustive right now because a lot of the canonical/historical models do not have a model card yet. Feel free to create them for the models you're researching. cc @thomwolf @LysandreJik @clmnt <|||||>Thanks, that's a good place to start. I am more interested in the canonical/historical models as you say. I see that some README.md in model cards have a ``` --- language: - bulgarian - czech - polish - russian - ... --- ``` Are you OK for me to do a pull request to add these? I would also like to standardize languages using the ISO 693-1 two-letter code (https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Do you agree on this standard? <|||||>Sounds good to me!<|||||>Super, I have forked the repo and created a branch. Expect a pull request in the new few weeks :) so we can close this.<|||||>FYI @MobiusLooper<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,396
closed
Cannot import from transformers
# 🐛 Bug ## Information I am trying to import TFGPT2LMHeadModel in transformers.Python giving below error. cannot import name 'TFGPT2LMHeadModel' from 'transformers ## To reproduce import tensorflow as tf from transformers import TFGPT2LMHeadModel,GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("gpt2") # add the EOS token as PAD token to avoid warnings model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id) # encode context the generation is conditioned on input_ids = tokenizer.encode('I enjoy walking with my cute dog', return_tensors='tf') # generate text until the output length (which includes the context length) reaches 50 greedy_output = model.generate(input_ids, max_length=50) print("Output:\n" + 100 * '-') print(tokenizer.decode(greedy_output[0], skip_special_tokens=True)) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
03-23-2020 14:04:05
03-23-2020 14:04:05
In order to import the tensorflow models, you need to have TF2+ installed. Please update your environment info if you *do* have TF2 installed in the environment in which you're running your script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
3,395
closed
🚀 Feature request Multimodal BERT Models
Hello, it would be great if more **multimodal** BERT models were included in the library. I have noticed that MMBT from Facebook is provided; however, I was unable to find any guidelines on how to make it work with 🤗 Transformers. Possible models could be [VilBERT](https://arxiv.org/abs/1908.02265), [VL-BERT](https://arxiv.org/abs/1908.08530), [VisualBERT](https://arxiv.org/abs/1908.03557), [VideoBERT](https://arxiv.org/abs/1904.01766), and so on. Best regards.
03-23-2020 12:40:32
03-23-2020 12:40:32
As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi seems like above examples folder has been removed, is it because the multi modal experiment is in intermediate stage.<|||||>I believe it's available here: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb<|||||>Hey is there any progress with it soon? I find only the mm-imdb example: https://github.com/huggingface/transformers/tree/master/examples/contrib/mm-imdb Your LXMERT model receives only text features from what I see ("visual_feats - These are currently not provided by the transformers library.") Thanks :) <|||||>The example for mmbt on mm-imdb is also an invalid link now. <|||||>Here is a correct link for now https://github.com/huggingface/transformers/tree/master/examples/research_projects/mm-imdb<|||||>> As for guidelines about making MMBT work, here is an example on the mm-imdb dataset: https://github.com/huggingface/transformers/blob/master/examples/mm-imdb/run_mmimdb.py. The link is broken!<|||||>> The link is broken! See the reply above you :) That seems to work
transformers
3,394
closed
Add comparison table with new models
03-23-2020 11:32:41
03-23-2020 11:32:41
transformers
3,393
closed
Create README.md
03-23-2020 11:30:50
03-23-2020 11:30:50
Thanks for sharing! Any way you could format the training results as a Markdown table? Might be more readable.<|||||>I'll merge for now