repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,726 | closed | Unrecognized configuration class GPT2Config for AutoModelForSeq2SeqLM | Microsoft DialoGPT no longer working | ## Information
Model I am using: Microsoft's DialoGPT
The problem arises when using:
* [x] the official example scripts:
Since the morning of July 14th, the inference API has been outputting errors on [Microsoft's DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium). It was working fine before July 14th.
Error
```
{'error': "Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'>
for this kind of AutoModel: AutoModelForSeq2SeqLM.\nModel type should be one of
BigBirdPegasusConfig, M2M100Config, LEDConfig, BlenderbotSmallConfig, MT5Config, T5Config, PegasusConfig, MarianConfig, MBartConfig, BlenderbotConfig, BartConfig, FSMTConfig, EncoderDecoderConfig, XLMProphetNetConfig, ProphetNetConfig."}
```
Query script as given on Hugging Face's site:
```python
import requests
API_URL = "https://api-inference.huggingface.co/models/microsoft/DialoGPT-medium"
headers = {"Authorization": "Bearer API_TOKEN"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()
output = query({
"inputs": {
"past_user_inputs": ["Which movie is the best ?"],
"generated_responses": ["It's Die Hard for sure."],
"text": "Can you explain why ?",
},
})
```
@patrickvonplaten, @LysandreJik
I'm mentioning these two people as the guide says they are working on gpt2. Sorry if I pinged the wrong people!
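For what it's worth, the checkpoint itself still loads and generates fine locally with the causal-LM auto class, which points at the API-side task mapping rather than at the model; a minimal local sketch:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT is a GPT-2 style causal LM, so the causal-LM auto class is the right one locally
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

inputs = tokenizer("Can you explain why ?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(**inputs, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```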
| 07-15-2021 03:01:56 | 07-15-2021 03:01:56 | Getting this error also. All fine-tuned models using DialoGPT are giving this exact error.<|||||>Hi everyone.
Not really sure what happened here (the error is pretty confusing). It is fixed now anyway. <|||||>If someone pinned a model while it was having issues, please let me know, we might have to update them to fix them too !<|||||>Hi @Narsil
These 2 models need update:
- dbmdz/german-gpt2
- benjamin/gerpt2<|||||>dbmdz/german-gpt2 seems to be working https://huggingface.co/dbmdz/german-gpt2?text=Heute+ist+sehr+sch%C3%B6nes+Wetter+in
It doesn't seem to be defined as `conversation`, is that what you're referring to ?
I am not sure how this model was defined, and so whether it actually works for conversation, but it doesn't seem to be the case.
The API works with text-generation for this model and it works fine.
`benjamin/gerpt2` seems to be exactly the same.
If you want to mark them as conversational you need to update the `pipeline_tag` https://huggingface.co/docs/hub/models-widgets#enabling-a-widget
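For reference, marking a repo as conversational is just a matter of adding a `pipeline_tag` entry to the YAML metadata block at the top of its model card, roughly like this (an illustrative sketch, not the actual metadata of those two repos):
```yaml
---
pipeline_tag: conversational
---
```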
Otherwise do you mind creating a new issue with the new error you're receiving to be able to reproduce (you can ping me) ?
Hope this helps. |
transformers | 12,725 | closed | [doc] performance: batch sizes | This PR adds a brief discussion of batch sizes for performance.
@sgugger | 07-15-2021 00:29:40 | 07-15-2021 00:29:40 | |
transformers | 12,724 | closed | [doc] testing: how to trigger a self-push workflow | Took me a while to figure out how to trigger self-push github actions test, so documenting how to do it right the first time.
@sgugger, @LysandreJik | 07-14-2021 23:45:45 | 07-14-2021 23:45:45 | |
transformers | 12,723 | closed | [deepspeed] nvme test hanging experiment: take4 | As reported in https://github.com/huggingface/transformers/issues/12715 nvme CUDA extension of deepspeed fails to build and leads to a hanging test. As suggested by @tjruwase we should be clearing out the CUDA binary extensions dir, since a new release might be incompatible with the old binary and things break.
```
rm -rf ~/.cache/torch_extensions/
```
And all appears to be resolved. Replicated to the scheduled job too.
Fixes: https://github.com/huggingface/transformers/issues/12715
@sgugger, @LysandreJik | 07-14-2021 23:26:34 | 07-14-2021 23:26:34 | Thanks a lot for fixing this! Should we merge the PR or leave it like this to remember how to manage a hang with deepspeed and nvme?
Do you know if rebuilding the extensions takes a long time?<|||||>Never tried to measure the rebuilding, and it'd depend on the hardware, but probably in a ballpark of 20-30secs.
--------------
My main concern with the proposed solution in this PR is a race condition where one job rebuilds and another slightly slower one wipes the binaries out, leading to test failure, since this is a shared fs.
This is the problem with pytorch CUDA extensions. Instead of installing the binaries into the python tree which could be many on the same system (virtual env) it installs them into `~/.cache/torch_extensions` which is shared between all virtual envs - really bad idea.
So the clean solution is not to install a pure python package and have it build JIT at run time, but instead to do a pre-build which then installs the binary cuda extensions into the right python env; then there is never a collision.
So it'd be:
```
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 pip install deepspeed --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check
```
Of course, this too takes time.
In theory we only need to do this once per deepspeed release, so we could also pre-build binary wheels and simply install those.
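For reference, a rough sketch of what that per-release pre-build could look like (the exact flags and `setup.py` invocation here are an assumption mirroring the JIT-prebuild command above, so check the deepspeed build docs):
```
git clone https://github.com/microsoft/DeepSpeed
cd DeepSpeed
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python setup.py build_ext -j8 bdist_wheel
# install the resulting wheel into each virtual env instead of JIT-building at run time
pip install dist/deepspeed-*.whl
```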
Do we have any other need for binary wheels besides `torch_scatter`?<|||||>And this would only be necessary if there's an issue with the deepspeed release, right? As there was no issue for the other machine, nor for your local machines. I wonder if we really need to implement a workaround for this or if we can't have this as a potential solution for future deepspeed issues that arise soon after a release.
There are no other binary wheels needs besides `torch_scatter`, but I'd rather keep those to a minimum as it doesn't help maintainability.<|||||>> And this would only be necessary if there's an issue with the deepspeed release, right?
We would need to do this for every release in case some Cpp code was changed.
------
Agreed, let's not do anything then and revisit this if it becomes a problem.
I wonder if we could create an index pointing to troubleshooting PRs/Issues, so e.g. this could be a start:
## Troubleshooting Github Actions CI (self-hosted box)
* Deepspeed
- if jit build hangs, clear out `rm -rf ~/.cache/torch_extensions/` reference: https://github.com/huggingface/transformers/pull/12723
and put it close to home, under `.github-actions/README.md`? or `.github-actions/TROUBLESHOOTING.md`<|||||>Oh that's a brilliant idea indeed!<|||||>closing this as it's now indexed by https://github.com/huggingface/transformers/blob/master/.github/workflows/TROUBLESHOOT.md |
transformers | 12,722 | closed | [deepspeed] nvme test hanging experiment: take3 | Trying to fix hanging test https://github.com/huggingface/transformers/issues/12715
WIP | 07-14-2021 23:22:44 | 07-14-2021 23:22:44 | grr, has to be on upstream continued in https://github.com/huggingface/transformers/pull/12723 |
transformers | 12,721 | closed | [WIP] [deepspeed] nvme test hanging experiment: take2 | Trying to fix hanging test https://github.com/huggingface/transformers/issues/12715 | 07-14-2021 23:13:35 | 07-14-2021 23:13:35 | continued in https://github.com/huggingface/transformers/pull/12722 - needed to have the branch name start with `ci_` |
transformers | 12,720 | closed | [Flax] Correct shift labels for seq2seq models in Flax | # What does this PR do?
Fixes #12719
This PR makes sure that the `shift_tokens_right` is always written in numpy as it will always be called in the data-collator
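For context, a numpy version of that helper looks roughly like this (a sketch of the idea, not the literal diff of this PR):
```python
import numpy as np


def shift_tokens_right(input_ids: np.ndarray, pad_token_id: int, decoder_start_token_id: int) -> np.ndarray:
    # Build decoder_input_ids by shifting the labels one position to the right, purely in numpy
    # so that the data collator never dispatches work to the TPU.
    shifted = np.zeros_like(input_ids)
    shifted[:, 1:] = input_ids[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # Replace ignored label positions (-100) with the pad token id.
    shifted = np.where(shifted == -100, pad_token_id, shifted)
    return shifted
```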
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-14-2021 22:10:37 | 07-14-2021 22:10:37 | |
transformers | 12,719 | closed | [Flax] Change all `shift_tokens_right` to numpy code | `shift_tokens_right` is usually called in the data collator and therefore should not be written in jax, but in numpy, so that it does not block the TPU. We should make sure that all Encoder-Decoder models have their `shift_tokens_right` implemented in numpy as it's faster. | 07-14-2021 22:00:19 | 07-14-2021 22:00:19 | |
transformers | 12,718 | closed | [trainer] release tmp memory in checkpoint load | As discovered in https://github.com/huggingface/transformers/issues/12680#issuecomment-880194562 we had a model-size memory leak when loading a checkpoint. @sgugger found a fix, which is what this PR implements.
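The gist of the fix (a sketch of the idea, not the literal diff; `checkpoint_path` and `model` are placeholders) is to release the temporary state dict as soon as the weights have been loaded, so two model-sized copies are never kept alive:
```python
import torch

state_dict = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(state_dict)
# drop the temporary buffer right away instead of letting it live until the end of the scope
del state_dict
```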
@sgugger | 07-14-2021 21:25:55 | 07-14-2021 21:25:55 | |
transformers | 12,717 | closed | [wip] [deepspeed] nvme test hanging experiment | Debugging https://github.com/huggingface/transformers/issues/12715
This PR is trying to revert to the last version known to work, 0.4.2.
| 07-14-2021 21:17:07 | 07-14-2021 21:17:07 | Continued in https://github.com/huggingface/transformers/pull/12721 |
transformers | 12,716 | closed | Fix typo in Speech2TextForConditionalGeneration example | # What does this PR do?
This PR fixes a small typo in the example docstring.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patil-suraj
| 07-14-2021 21:12:44 | 07-14-2021 21:12:44 | |
transformers | 12,715 | closed | [testing] failing tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload | So a few days ago `tests/deepspeed/test_deepspeed.py::TrainerIntegrationDeepSpeed::test_stage3_nvme_offload` started hanging and getting killed by pytest-timeout.
It gets stuck in `_jit_compile` which never completes. This is nvme-specific, as all other deepspeed tests that use jit work just fine.
If I run it on my own setup by first removing `rm -rf ~/.cache/torch_extensions/` it works just fine. So it happens only on that github-actions runner.
I went back to the logs from a few days back when it wasn't failing and checked that the same libaio packages are installed in both cases:
```
Get:1 http://archive.ubuntu.com/ubuntu focal/main amd64 libaio1 amd64 0.3.112-5 [7184 B]
Get:2 http://archive.ubuntu.com/ubuntu focal/main amd64 libaio-dev amd64 0.3.112-5 [13.7 kB]
```
@tjruwase, any insights into why it might start hanging on building the nvme cuda extension?
The main difference is that the successful run was using deepspeed-0.4.2 and it started failing with deepspeed-0.4.3 release. I looked through the changes since 0.4.2 and I don't see anything remotely related to the op_builder other than https://github.com/microsoft/DeepSpeed/pull/1213 - could that be related?
The full log is:
```
self = <test_deepspeed.TrainerIntegrationDeepSpeed testMethod=test_stage3_nvme_offload>
@require_deepspeed_aio
def test_stage3_nvme_offload(self):
with mockenv_context(**self.dist_env_1_gpu):
# this actually doesn't have to be on NVMe, any storage will do since this test only
# runs a simple check that we can use some directory as if it were NVMe
nvme_path = self.get_auto_remove_tmp_dir()
nvme_config = dict(device="nvme", nvme_path=nvme_path)
ds_config_zero3_dict = self.get_config_dict(ZERO3)
ds_config_zero3_dict["zero_optimization"]["offload_optimizer"] = nvme_config
ds_config_zero3_dict["zero_optimization"]["offload_param"] = nvme_config
trainer = get_regression_trainer(local_rank=0, fp16=True, deepspeed=ds_config_zero3_dict)
with CaptureLogger(deepspeed_logger) as cl:
> trainer.train()
tests/deepspeed/test_deepspeed.py:321:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/trainer.py:1124: in train
deepspeed_engine, optimizer, lr_scheduler = deepspeed_init(
src/transformers/deepspeed.py:370: in deepspeed_init
model, optimizer, _, lr_scheduler = deepspeed.initialize(
/opt/conda/lib/python3.8/site-packages/deepspeed/__init__.py:126: in initialize
engine = DeepSpeedEngine(args=args,
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:194: in __init__
self._configure_optimizer(optimizer, model_parameters)
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:726: in _configure_optimizer
self.optimizer = self._configure_zero_optimizer(basic_optimizer)
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py:940: in _configure_zero_optimizer
optimizer = FP16_DeepSpeedZeroOptimizer_Stage3(
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py:809: in __init__
self._configure_tensor_swapping(offload_optimizer_config, aio_config)
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py:938: in _configure_tensor_swapping
self.optimizer_swapper = swapper_type(
/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_optimizer_swapper.py:47: in __init__
aio_op = AsyncIOBuilder().load()
/opt/conda/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py:239: in load
return self.jit_load(verbose)
/opt/conda/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py:267: in jit_load
op_module = load(
/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:1074: in load
return _jit_compile(
/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:1301: in _jit_compile
baton.wait()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <torch.utils.file_baton.FileBaton object at 0x7f7418fe1fa0>
def wait(self):
'''
Periodically sleeps for a certain amount until the baton is released.
The amount of time slept depends on the ``wait_seconds`` parameter
passed to the constructor.
'''
while os.path.exists(self.lock_file_path):
> time.sleep(self.wait_seconds)
E Failed: Timeout >60.0s
/opt/conda/lib/python3.8/site-packages/torch/utils/file_baton.py:42: Failed
----------------------------- Captured stdout call -----------------------------
[2021-07-14 20:39:36,891] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.3, git-hash=unknown, git-branch=unknown
[2021-07-14 20:39:36,892] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1
[2021-07-14 20:39:36,914] [INFO] [engine.py:179:__init__] DeepSpeed Flops Profiler Enabled: False
Using /github/home/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module cpu_adam, skipping build step...
Loading extension module cpu_adam...
Time to load cpu_adam op: 0.25669288635253906 seconds
Adam Optimizer #19 is created with AVX2 arithmetic capability.
Config: alpha=0.000050, betas=(0.900000, 0.999000), weight_decay=0.000000, adam_w=1
[2021-07-14 20:39:37,652] [INFO] [engine.py:708:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2021-07-14 20:39:37,653] [INFO] [engine.py:713:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
[2021-07-14 20:39:37,653] [INFO] [utils.py:43:is_zero_supported_optimizer] Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2021-07-14 20:39:37,653] [INFO] [logging.py:68:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer
[2021-07-14 20:39:37,653] [INFO] [engine.py:938:_configure_zero_optimizer] Initializing ZeRO Stage 3
[2021-07-14 20:39:37,653] [INFO] [stage3.py:633:__init__] Reduce bucket size 1
[2021-07-14 20:39:37,653] [INFO] [stage3.py:634:__init__] Allgather bucket size 0.9
Using /github/home/.cache/torch_extensions as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.0005452632904052734 seconds
[2021-07-14 20:39:37,656] [INFO] [stage3.py:933:_configure_tensor_swapping] Tensor Swapping: Adding optimizer tensors
[2021-07-14 20:39:37,657] [INFO] [utils.py:30:print_object] SwapBufferManager:
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] count ........................ 4
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] dtype ........................ torch.float32
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] free_buffer_index ............ [0, 1, 2, 3]
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] gigabytes .................... 3.814697265625e-06
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] num_elems .................... 256
[2021-07-14 20:39:37,657] [INFO] [utils.py:34:print_object] used_buffer_index ............ {}
Using /github/home/.cache/torch_extensions as PyTorch extensions root...
----------------------------- Captured stderr call -----------------------------
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
Using amp fp16 backend
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~ Stack of Thread-1 (140136515512064) ~~~~~~~~~~~~~~~~~~~~~~
File "/opt/conda/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/opt/conda/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/opt/conda/lib/python3.8/site-packages/tqdm/_monitor.py", line 59, in run
self.was_killed.wait(self.sleep_interval)
File "/opt/conda/lib/python3.8/threading.py", line 558, in wait
signaled = self._cond.wait(timeout)
File "/opt/conda/lib/python3.8/threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
~~~~~~~~~~~~~~~~~~~~~ Stack of <unknown> (140136768341760) ~~~~~~~~~~~~~~~~~~~~~
File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 285, in _perform_spawn
reply.run()
File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 220, in run
self._result = func(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 967, in _thread_receiver
msg = Message.from_io(io)
File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 432, in from_io
header = io.read(9) # type 1, channel 4, payload 4
File "/opt/conda/lib/python3.8/site-packages/execnet/gateway_base.py", line 400, in read
data = self._read(numbytes - len(buf))
+++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++
``` | 07-14-2021 21:12:31 | 07-14-2021 21:12:31 | I'm running an experiment with `deepspeed==0.4.2` https://github.com/huggingface/transformers/pull/12717
<|||||>> If I run it on my own setup by first removing `rm -rf ~/.cache/torch_extensions/` it works just fine. So it happens only on that
I have seen these kinds of DeepSpeed hangs building different extensions at different points in time, and in all cases deleting the `.cache/torch_extensions` seems to always do the trick. I have always felt that this was caused by a timing issue in the build process. What happens if you manually deleted the cache folder in the nvme unit test?<|||||>Nothing immediately comes to mind for me either. It seems like it's stuck waiting for a lock file to go away?
>
> while os.path.exists(self.lock_file_path):
Maybe the build of the extension before aio didn't delete that file during its cleanup?
Would that file get left behind if there was a problem building cpu_adam?<|||||>Thank you for the tip, @tjruwase!
I've added this clean up to the CI job, I think it should be there all the time, since deepspeed won't rebuild a new extension after it built an old one I think.
Hopefully that did the trick. I will have to weight for a while till that job gets run.
---
@adammoody, let's see if Tunji's trick works. Most likely the problem is unrelated to your PR.
<|||||>I think it did the trick, thank you @tjruwase!
https://github.com/huggingface/transformers/pull/12723
<|||||>That's very cool !!! I have been stuck here for a long time, and finally I found this solution!
The system just waiting after the log:
>Using ~/.cache/torch_extensions/py38_cu113 as PyTorch extensions root...
The debug process located at
>def wait(self):
'''
Periodically sleeps for a certain amount until the baton is released.
The amount of time slept depends on the ``wait_seconds`` parameter
passed to the constructor.
'''
while os.path.exists(self.lock_file_path):
time.sleep(self.wait_seconds)
After I remove the folder, the process became normal. Cool!
|
transformers | 12,714 | closed | layoutlm TokenClassificationPipeline | Hi, I was looking at using transformers.pipeline for TokenClassification with an instance of microsoft/layoutlm-base-uncased that I have fine tuned. I would like to use pipeline to take advantage of the entity aggregation_strategy feature for extracted entities.
However it is unclear to me how/whether TokenClassificationPipeline works with layoutlm for inference because layoutlm expects both a input text and input bounding boxes, unlike other text only models.
Do you know if TokenClassificationPipeline is supposed to work with layoutlm and are there any examples? | 07-14-2021 21:07:23 | 07-14-2021 21:07:23 | cc @NielsRogge <|||||>I'm afraid the `TokenClassificationPipeline` will not work with LayoutLM, the reason being that, as you mention, the model expects an additional input besides text, namely bounding boxes.
We are currently discussing the design of pipelines for models like LayoutLM.
<|||||>Ok thanks for the information! I'll just work around it for now.<|||||>@NielsRogge are you aware of any developments in terms of pipeline integration for LayoutLM-like models? Thanks :) <|||||>@mishig25 worked on supporting LayoutLM for the object detection pipeline, but that wasn't added in the end. Not sure if we can add it to the existing pipeline, cause the model requires a few additional inputs (`bbox`, and `pixel_values`), cc @Narsil <|||||>Hi, it probably won't be implemented directly in `transformers` because arguments are different and so on.
However you should be able to override the pipeline yourself doing something like
```python
pipe = pipeline(model="..." , pipeline_class=MyPipeline)
class MyPipeline(Pipeline):
def preprocess(self, inputs):
# Just return the inputs that will be sent to the model
return model_inputs
def _forward(self, model_inputs):
model_outputs = self.model(**model_inputs)
return model_outputs
def postprocess(self, model_outputs):
# Finalize the objects
return final_object
```
If you inherit `TokenClassificationPipeline` you could definitely reuse what is already being done with aggregation_strategies.
|
transformers | 12,713 | closed | Add versioning system to fast tokenizer files | # What does this PR do?
Some changes cannot be done to the fast tokenizers file without breaking backward compatibility. This PR introduces a versioning system by allowing a model repo to contain multiple tokenizer files: the `tokenizer.json` is the default one and if one (or several) `tokenizer.x.y.z.json` exist, those files are used for the version x.y.z (of Transformers) and above.
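As a sketch of the resolution rule this describes (illustrative only, not the code in this PR; the file names and helper below are hypothetical):
```python
from packaging import version


def pick_tokenizer_file(available_files, transformers_version):
    # Keep the plain tokenizer.json as the fallback.
    best_name, best_version = "tokenizer.json", None
    for name in available_files:
        parts = name.split(".")
        # Versioned files follow the pattern tokenizer.x.y.z.json described above.
        if len(parts) == 5 and parts[0] == "tokenizer" and parts[-1] == "json":
            v = version.parse(".".join(parts[1:4]))
            # A tokenizer.x.y.z.json applies to Transformers x.y.z and above,
            # so pick the highest version that does not exceed the installed one.
            if v <= version.parse(transformers_version) and (best_version is None or v > best_version):
                best_name, best_version = name, v
    return best_name


# e.g. on Transformers 4.11.0:
# pick_tokenizer_file(["tokenizer.json", "tokenizer.4.10.0.json"], "4.11.0") -> "tokenizer.4.10.0.json"
```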
cc @n1t0 as it should be helpful to solve that longstanding bug. | 07-14-2021 21:06:39 | 07-14-2021 21:06:39 | might be cleaner if this worked in the other direction, i.e.
> multiple tokenizer files: the `tokenizer.json` is the default one, used in the most recent version of Transformers. If one or more `tokenizer-x.y.z.json` exist, those files are used for the version x.y.z (of Transformers) and below.
Makes more sense on the Hub side as well. What do you think?<|||||>@julien-c this would break repositories that rely on `transformers` versions that are earlier than the first one that will have this feature.
Were we to update the `tokenizer.json` file to the new, "fixed" one, and add a new `tokenizer-x.x.x.json` file to be used by earlier versions of `transformers`, then we would have no way of telling all versions < `4.10.0` to use that version rather than the standard `tokenizer.json` file.<|||||>I think your assertion depends on what kind of changes are made to the JSON files. If it's only new attributes for example I wouldn't expect older versions to break, but from what I understand you're actually talking about modifying the actual attributes?<|||||>Yes, the attributes actually need to be modified. For example, see this issue: https://github.com/huggingface/transformers/issues/9633
There was an offset mappings bug, which needed to be patched. However, the issue lived in the `tokenizer.json` file itself - so the recommended way to patch this was for users to recompile that file, by passing the "slow" tokenizer files, and using the newer `tokenizers` version to generate the updated file.
I believe there are other issues, and there will be other issues as the libraries continue to evolve. Implementing this here allows us to ensure that the previous versions remain completely unaffected - while offering a way to patch models for future use.<|||||>> Yes, the attributes actually need to be modified. For example, see this issue: #9633
>
> There was an offset mappings bug, which needed to be patched. However, the issue lived in the `tokenizer.json` file itself - so the recommended way to patch this was for users to recompile that file, by passing the "slow" tokenizer files, and using the newer `tokenizers` version to generate the updated file.
>
> I believe there are other issues, and there will be other issues as the libraries continue to evolve. Implementing this here allows us to ensure that the previous versions remain completely unaffected - while offering a way to patch models for future use.
going on a whim here, but what about using git branches to do this?<|||||>The problem with a new branch is that we then can't have a new version of the model in a new git branch that has to be used with one tokenizer file if versions of Transformers are old, and another one if they are more recent. And it wouldn't be compatible with the sure selecting their own branch as well (though in that case they should make sure to have the right version with tokenizers file).
The key here (for more context) is that we have tokenizers that have a "wrong" tokenizer file for more recent versions of Tokenizers (controlled by the version of Transformers) because there was a bug in the conversion from slow to fast tokenizer script. We can't touch the main branch and the tokenizer.json file otherwise every code in production using those models will suddenly break (the changes are significant sadly). |
transformers | 12,712 | closed | [doc] parallelism: Which Strategy To Use When | as requested by https://github.com/huggingface/transformers/issues/12688 adding a new section on Which Strategy To Use When
Fixes: https://github.com/huggingface/transformers/issues/12688
@sgugger | 07-14-2021 20:32:29 | 07-14-2021 20:32:29 | |
transformers | 12,711 | closed | Error while performing eval on clm using gpt2 in flax | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
@patrickvonplaten @patil-suraj
## Information
Model I am using (Bert, XLNet ...):GPT2
The problem arises when using:
* [1 ] the official example scripts: (give details below)
using the examples/flax/language-modeling/run_clm_flax.py
* [ 2] my own modified scripts: (give details below):
a run.sh file of the format -
run.sh --param1 --param2
The tasks I am working on is:
* [ 1] my own task or dataset: (give details below):
* txt file containing rap lyrics starting with <BOS> and ending with <EOS>
## To reproduce
Steps to reproduce the behavior:
1.1.Make a new directory test and change to this directory
2.Add tokenizer.json and config.json from the gpt2 repo from (https://huggingface.co/gpt2/tree/main) to this repository
3.Make a run.sh file of the type run.sh --param1 --param2 and add evaluation parameters such --do_eval and --eval_steps
4.run the file ./run.sh
## Expected behavior
When evaluation occurs you will get the following error:
File "run_clm_flax.py", line 640, in <module>
main()
File "run_clm_flax.py", line 609, in main
eval_metrics = get_metrics(eval_metrics)
File "/home/anantshankhdhar/RapAiAnant/lib/python3.8/site-packages/flax/training/common_utils.py", line 53, in get_metrics
return stack_forest(metrics_np)
File "/home/anantshankhdhar/RapAiAnant/lib/python3.8/site-packages/flax/training/common_utils.py", line 45, in stack_forest
return jax.tree_multimap(stack_args, *forest)
TypeError: tree_map() missing 1 required positional argument: 'tree'
| 07-14-2021 20:05:36 | 07-14-2021 20:05:36 | Hey @AnantShankhdhar - could you please provide the `run.sh` file?<|||||>The error was because the eval batch size was very high |
transformers | 12,710 | closed | [test] split test into 4 sub-tests to avoid timeout | This PR splits the long test into 4 sub-tests to avoid timeout, as each sub-test is relatively slow.
This supercedes https://github.com/huggingface/transformers/pull/12699
@LysandreJik, @sgugger
| 07-14-2021 18:55:27 | 07-14-2021 18:55:27 | |
transformers | 12,709 | closed | Init adds its own files as impacted | # What does this PR do?
As pointed out by @patrickvonplaten, the script that fetches the right tests does not consider the init of a submodule impacts its files. This PR addresses that. | 07-14-2021 18:00:35 | 07-14-2021 18:00:35 | |
transformers | 12,708 | closed | [Bug?] question answering - end position of each input is weird | I run "python run_qa.py" in transformers/examples/pytorch/question-answering.
In the prepare_train_features function, I think the "end position" is lower than the expected position.
I tested the first SQuAD example in the "prepare_train_features" function.
For example, answer text = 'Saint Bernadette Soubirous'
print(tokenizer(answer_text))
=> return: {'input_ids': [101, 3002, 16595, 9648, 4674, 2061, 12083, 9711, 2271, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
print(input_ids[tokenized_examples['start_positions'][0]:tokenized_examples['end_positions'][0]])
=> return: [3002, 16595, 9648, 4674, 2061, 12083, 9711]
=> Thus I think the last token 2271 is dropped.
For other input sentences, I think the last token is dropped as well.
Isn't it bug?? | 07-14-2021 17:35:18 | 07-14-2021 17:35:18 | I believe @sgugger worked on that script<|||||>Hi there, noticed you closed this so may have come to the same conclusion, but the "end_positions" will give you the position of the last token in the answer. So you should add a +1 in your slice to include that token at "end_positions".<|||||>Thank you for your reply.
I recognized my mistakes, thus I closed the issue myself.
Thank you for checking one more time.
Next time, I'll post the issue more carefully!
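To make the suggested `+1` concrete (reusing the names from the snippet in the issue body above):
```python
# end_positions points at the last answer token, so the slice needs a +1 to include it
answer_ids = input_ids[tokenized_examples["start_positions"][0] : tokenized_examples["end_positions"][0] + 1]
# -> [3002, 16595, 9648, 4674, 2061, 12083, 9711, 2271]
```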
|
transformers | 12,707 | closed | Convert model from flax to TF | I am trying to convert my flax MT5 model to TensorFlow. I devised the following script using https://github.com/huggingface/transformers/issues/12545
```
from transformers import MT5Model, MT5TokenizerFast, TFMT5Model, MT5Config, FlaxT5ForConditionalGeneration
import numpy as np
import jax
import jax.numpy as jnp
pretrained = "../dumped/code-mt5-large-batch-mix/" # earlier missed the fact that there is no ckpt in this dir
tmp_path = "../dumped/code-mt5-large-batch-mix-tensorflow"
config = MT5Config.from_pretrained(pretrained, from_flax=True)
model = FlaxT5ForConditionalGeneration.from_pretrained(pretrained, config=config)
tokenizer = MT5TokenizerFast(pretrained, use_fast=True, extra_ids=160)
def to_f32(t):
return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
model.params = to_f32(model.params)
model.save_pretrained(tmp_path)
model_tf = TFMT5Model.from_pretrained(tmp_path)
model_tf.save_pretrained(tmp_path)
```
However, the conversion gets aborted with this output: https://paste.ubuntu.com/p/Ynw9Tn8NC9/
According to the output, the conversion seems to require a specific file instead of the entire model directory `../dumped/code-mt5-large-batch-mix/` (` what(): basic_filebuf::underflow error reading the file: Is a directory
`). We are not sure if this is the case and if so what is the specific file required.
The contents of `../dumped/code-mt5-large-batch-mix/` are:

Some help with this model conversion is much appreciated. Thanks! | 07-14-2021 14:50:45 | 07-14-2021 14:50:45 | At the moment we only have Flax <=> PT and TF <=> PT conversion. So you should do the following:
```python
from transformers import T5ForConditionalGeneration, MT5TokenizerFast, TFT5ForConditionalGeneration, MT5Config, FlaxT5ForConditionalGeneration
import numpy as np
import jax
import jax.numpy as jnp
pretrained = "../dumped/code-mt5-large-batch-mix/" # earlier missed the fact that there is no ckpt in this dir
tmp_path = "../dumped/code-mt5-large-batch-mix-tensorflow"
config = MT5Config.from_pretrained(pretrained, from_flax=True)
model = T5ForConditionalGeneration.from_pretrained(pretrained, config=config)
tokenizer = MT5TokenizerFast(pretrained, use_fast=True, extra_ids=160)
def to_f32(t):
return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
model.params = to_f32(model.params)
model.save_pretrained(tmp_path)
model_pt = T5ForConditionalGeneration.from_pretrained(tmp_path, from_flax=True)
model_pt.save_pretrained(tmp_path)
model_tf = TFT5ForConditionalGeneration.from_pretrained(tmp_path, from_pt=True)
model_tf.save_pretrained(tmp_path)
```<|||||>Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,706 | closed | Deprecate TFTrainer | This PR adds a deprecation warning to `TFTrainer`, and offers advice and a link to the new Keras examples. | 07-14-2021 14:17:00 | 07-14-2021 14:17:00 | |
transformers | 12,705 | closed | Fix uninitialized variables when `config.mask_feature_prob > 0` | When `config.mask_feature_prob > 0` AND `mask_time_indices is not None` then `batch_size` and `sequence_length` are not defined for masking over features axis.
This PR solves this. | 07-14-2021 13:48:22 | 07-14-2021 13:48:22 | Thanks a lot! |
transformers | 12,704 | closed | Where is the casual mask when using BertLMHeadModel and set config.is_decoder = True? | I hope to use BERT for the task of causal language modeling.
`BertLMHeadModel ` seems to meet my needs, but I did not find any code snippets about the causal mask, even if I set the `config.is_decoder=True`.
I only find the following related code in https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L968.
however, I do not have any values to pass into the argument `encoder_hidden_states` when doing causal language modeling.
So maybe the causal mask does not work?
```
if self.config.is_decoder and encoder_hidden_states is not None:
encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size()
encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
if encoder_attention_mask is None:
encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask)
else:
encoder_extended_attention_mask = None
``` | 07-14-2021 13:15:50 | 07-14-2021 13:15:50 | Hi @Doragd, BERT is an encoder model, and is therefore ill-suited to the causal language modeling task. Is there a reason you would like to use that model specifically for causal language modeling?<|||||>Hi, @LysandreJik I just apply causal language modeling as an auxiliary task to lead stable training of our model. I should have implemented this process myself, but I found this class `BertLMHeadModel`. However, I did not find any code snippet to implement causal mask. I would like to know that if is_decoder=True is set in BERT, can causal language modeling be achieved correctly?<|||||>cc @patrickvonplaten <|||||>Setting `is_decoder=True` automatically creates a causal mask in those lines of code: https://github.com/huggingface/transformers/blob/7fae5350528474c29b664ebb4df5bbc8104b48ec/src/transformers/modeling_utils.py#L266 |
transformers | 12,703 | closed | Update TF examples README | Update the general README for all TF examples now that the Keras push is finished, as well as adding in the missing README for the token classification example. | 07-14-2021 12:32:37 | 07-14-2021 12:32:37 | |
transformers | 12,702 | closed | Examples/flax/run_clm_flax.py showing error file extension error for train_file attribute even though file has the correct extension | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
@patrickvonplaten @patil-suraj
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ 1] the official example scripts: (give details below)
Used examples/flax/run_clm_flax.py for gpt 2 text generation
* [ 2] my own modified scripts: (give details below)
For the running command I modified the one given for causal language modeling in examples/flax/language-modeling/README.md by removing the dataset name parameter and instead passing the train_file argument as
--train_file = "/home/anantshankhdhar/gpt2-rap-lyric-generator/Lilgpt.txt"\ from my system
The tasks I am working on is:
* [ 1] my own task or dataset: (give details below)
I made a dataset called Lilgpt.txt, which is a txt file consisting of rap lyrics. Each song starts with a <BOS> token and ends with an <EOS> token.
## To reproduce
Steps to reproduce the behavior:
1.Make a new directory test and change to this directory
2.Add tokenizer.json and config.json from the gpt2 repo from (https://huggingface.co/gpt2/tree/main) to this repository
3.make a run.sh file like this
<img width="1440" alt="Screenshot 2021-07-14 at 2 34 59 AM" src="https://user-images.githubusercontent.com/56432951/125616296-4fbc7b18-1432-4d35-a4e6-7563ed8edd9e.png">
4.Add a txt file as the train_file attribute in run.sh and add a txt file dataset to the directory
5. type ./run.sh in terminal
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. you will get the following error:-
File "./run_clm_flax.py", line 640, in <module>
main()
File "./run_clm_flax.py", line 241, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
File "/home/anantshankhdhar/transformers/src/transformers/hf_argparser.py", line 191, in parse_args_into_dataclasses
obj = dtype(**inputs)
File "<string>", line 13, in __init__
File "./run_clm_flax.py", line 164, in __post_init__
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
AssertionError: `train_file` should be a csv, a json or a txt file.
2. However our train_file is txt only so we should have not got the error
3. The ideal behavior is training begins smoothly
| 07-14-2021 11:45:26 | 07-14-2021 11:45:26 | Hey @AnantShankhdhar,
I cannot copy paste the script to run the code since it's a screenshot (please never post screenshots of code in an issue, always copy-paste & format them with:
```
run.sh --param1 --param2
```<|||||>To solve your error: you should not have this format in your bash script
```bash
./run_clm_flax.py \
...
--train_file = "file.txt" \
```
but instead have the following format in your bash script
```bash
./run_clm_flax.py \
...
--train_file="file.txt" \
```
(Note how there are no whitespaces around the `=` in the bash script)<|||||>Thanks yes it worked |
transformers | 12,701 | closed | Translate README.md to Traditional Chinese | # What does this PR do?
1. Add README_zh-hant.md and links to direct users to each README.
2. Some of the terms in the file can be found at [National Academy for Educational Research](https://terms.naer.edu.tw/), an official website providing bilingual translations between English and Traditional Chinese.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @JetRunner
| 07-14-2021 11:30:07 | 07-14-2021 11:30:07 | I'll ask my friends who are also native in Traditional Chinese, to help double check the terms in the files. So, we can ensure the accuracy of the translation.<|||||>@JetRunner We have had the files checked. It is ready to merge if there is no other mistake.<|||||>Cool! I'll give it a look then we are ready to merge. |
transformers | 12,700 | closed | Doc - expecting `push_to_hub` method for Tokenizers to be also in the Tokenizer class doc pages | # π Doc request
The gist is in the title. I was expecting the doc/docstring for the `push_to_hub` method for Tokenizers to be also in the Tokenizer class doc pages, e.g. on the main `Tokenizer` API landing page: https://huggingface.co/transformers/main_classes/tokenizer.html
| 07-14-2021 11:24:33 | 07-14-2021 11:24:33 | cc @sgugger |
transformers | 12,699 | closed | Add a custom timeout for log replica test | Add a custom timeout for log replica test. Let's keep these outliers to a minimum. | 07-14-2021 09:16:44 | 07-14-2021 09:16:44 | Hmm, it's really slow - clocked `1m16.735s` on my machine.
Let me see first if it can be made faster.<|||||>It's like 4 tests in one - so it adds up - I guess I could just split it in several sub-tests.<|||||>What do you guys prefer here? We can also make it @slow - which will shave off ~80sec - it doesn't need to run all the time at all.<|||||>No strong opinion on my side, do what you think is best!<|||||>oh, but these are multi-gpu tests so they are @slow already as they only run on our machine only
@LysandreJik, does this impact the push workflow? or just scheduled one?
If so I'd also `@slow` all the fairscale/apex tests, as these definitely don't need to run often at all.<|||||>Here we go: https://github.com/huggingface/transformers/pull/12710 - reworked 1 to 4 subtests, shouldn't run longer than the timeout now.<|||||>merged the alternative solution, closing this then<|||||>The multi GPU tests are run every time there is a commit on `master`, so it's not only slow tests. We have fast and slow GPU & multi-GPU tests.
Thanks a lot for splitting that test, way better solution.<|||||>ok, so should we put fairscale and apex tests to @slow then? These are hardly ever used by anyone, so would be a waste to spend $$ and time on those.<|||||>We can, but it's not a super high priority, they seem to run quickly enough |
transformers | 12,698 | closed | [Examples]Flax Seq2Seq example fails when doing only eval or predict | ## Descriptions
In the Flax example, if we only run the predict or eval step, the script is not flexible enough to work. It will fail at this line
https://github.com/huggingface/transformers/blob/5dd0c956a8eb492c8597e9673cc1d818f0e6b501/examples/flax/summarization/run_summarization_flax.py#L569
because the current script is written in such a way it will always need a training dataset
I have modified a version of the same file which works but I need to remove the below line in that case
https://github.com/huggingface/transformers/blob/5dd0c956a8eb492c8597e9673cc1d818f0e6b501/examples/flax/summarization/run_summarization_flax.py#L467
and I always need to pass training data and create train_dataset by preprocessing, even if I am not doing training.
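A minimal sketch of the kind of guard that would make eval/predict-only runs work (variable names follow `run_summarization_flax.py`; this is an illustration, not the final patch):
```python
# pick the column names from whichever split the run actually needs,
# instead of unconditionally reading dataset["train"]
if training_args.do_train:
    column_names = dataset["train"].column_names
elif training_args.do_eval:
    column_names = dataset["validation"].column_names
elif training_args.do_predict:
    column_names = dataset["test"].column_names
else:
    raise ValueError("Need at least one of --do_train, --do_eval or --do_predict.")
```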
### Who can help
@patrickvonplaten @patil-suraj @sgugger | 07-14-2021 08:37:47 | 07-14-2021 08:37:47 | yes, right now for simplicity the scripts are written such that they always expect train datasets.
Feel free to open a PR :)<|||||>Sure, I will open PR!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,697 | closed | SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error> ``` | > ```
> import pickle
> from transformers import AutoModelForCausalLM
>
> pickle.dumps(AutoModelForCausalLM)
> ```
>
> I think it's comes from the fact those are autogenerated.
Thanks for your help, but I tested based on your modification in #12654, and a new problem arises:
@stas00 @patrickvonplaten, @LysandreJik
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 456, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1275, in train
tr_loss += self.training_step(model, inputs)
File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1778, in training_step
self.scaler.scale(loss).backward()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
Variable._execution_engine.run_backward(
SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error
_Originally posted by @lancekung in https://github.com/huggingface/transformers/issues/12621#issuecomment-878738997_ | 07-14-2021 07:34:46 | 07-14-2021 07:34:46 | Hmm, somehow this issue has never been addressed.
In such cases you will have a better luck reporting torch-land issues to https://github.com/pytorch/pytorch/issues as chances are low we will have the required understanding.
I tried to google the exception and only found this to be relevant:
https://discuss.pytorch.org/t/autograd-vague-error-returned-null-without-setting-an-error/112781/6
Are you by chance too using apex's amp?
Someone reported that building their own version of pytorch solved the problem. So perhaps you could try to switch to an older or newer pytorch and see if the problem goes away?
On my setup (pt-1.9.0)
```
python -c "import pickle; from transformers import AutoModelForCausalLM; pickle.dumps(AutoModelForCausalLM)"
```
it works w/o a problem (that is if that's the code that caused the error in OP).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,696 | closed | Refactored code to improve performance. | # What does this PR do?
Refactors several segments of code in the `scripts`,`src`,`tests`,`utils` and `setup.py` and increases performance by a bit, using compression methods and newer practices.
No new functions or methods/models were added; therefore no documentation changes were required.
## Before submitting
* [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
* [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
* [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
* [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
* [ ] Did you write any new necessary tests?
| 07-14-2021 04:36:27 | 07-14-2021 04:36:27 | @LysandreJik Sorry for the ping but I'd like to know your thoughts on this PR and whether I did whatever you had asked me to do in the [previous one](https://github.com/huggingface/transformers/pull/12639). Thanks!<|||||>Hi, I'd like to put my two cents in this PR.
At first, although most of the time we want to make the code concise and shortened, we should meanwhile have the code readable and maintainable in order to let the next contributor easily understand the purpose of a code segment.
Taking a part of your contribution (src/transformers/commands/user.py ) as an example:
```
- lines = []
- lines.append(row_format.format(*headers))
+ lines = [row_format.format(*headers)]
lines.append(row_format.format(*["-" * w for w in col_widths]))
```
Here you used a shortened expression to instantiate `lines` list with the first line at the same time, which is good when you just have one element to be appended; however, the `lines` in this place means to interact with users by displaying a block of text, and we expect that the `append` method will be used many times in order to add additional information. As a result, this change might break the consistency of the code block as it mixed instantiation and appending. On the contrary, making them separated (instantiation & appending new values) can increase the readability for the next contributor.
Here is another example (src/transformers/commands/serving.py):
```
nlp = pipeline(
task=args.task,
- model=args.model if args.model else None,
+ model=args.model or None,
config=args.config,
tokenizer=args.tokenizer,
device=args.device,
)
```
Though the `or` operator works here, this change may surprise or confuse the next contributor, because we usually use `or` when we need to check whether the left or right operand is truthy, and here `None` is always falsy. As a result, a reader may wonder why we check a value which is always equivalent to `False`.
The other thing is that this PR includes too many changes across numerous files (51 files changed), from setup.py to model definitions. This makes it difficult for the maintainers to review. Therefore, I would suggest choosing a subset of related files and making sure the changes stay readable and maintainable.
Good luck!<|||||>
Thanks for replying! I'll keep these points in mind and make changes accordingly. Is it okay if I write a few comments explaining some of these hard-to-read code segments, or should I not make changes to these altogether? <|||||>Writing comments is a good idea when a code segment cannot directly express its purpose by itself. However, in my opinion, the original implementations I mentioned above already convey their purpose and goal clearly to the next contributor without any comment, and I think that is the best coding practice (even if the performance difference is minuscule).
Therefore, I would suggest making sure which code segments really need to be refactored (i.e. the change will bring a significant improvement such as better time complexity or increased readability, etc.) before refactoring. <|||||>
Gotcha, thanks. |
transformers | 12,695 | closed | [Deepspeed] add many more models to the model zoo test | This PR continues figuring out how to make various models work with Deepspeed (a lot of fixes happen on the Deepspeed side); most models just work out of the box - the main purpose of this PR is to test as many models as possible, so there are no fixes to add.
- [x] update coverage to albert, bart, bert, bigbird_pegasus, big_bird, blenderbot, deberta, deberta_v2, distilbert, electra, flaubert, fsmt, funnel, gpt2, gptj, gpt_neo, layoutlm, led, longformer, marian, mbart, mobilebert, mpnet, pegasus, prophetnet, roberta, squeezebert, t5, t5_v1, vit, xlm_roberta, xlnet
Thanks to @LysandreJik for creating the tiny test models for many of HF models!
Some models I couldn't cover for a variety of reasons unrelated to Deepspeed (missing tokenizers, missing tiny models, missing example scripts to exercise these). But their status is documented in the script. Over time more will be tested.
Blocking events - all resolved:
- [x] https://github.com/microsoft/DeepSpeed/pull/1227 (fixes reference counting)
- [x] https://github.com/microsoft/DeepSpeed/pull/1380 (fixes zero_to_fp32 recovery of uneven param shapes)
- [x] https://github.com/huggingface/transformers/pull/13665 (fixes positional embeddings: m2m_100 and others)
- [x] https://github.com/microsoft/DeepSpeed/pull/1916#event-6563217392 (fixes tracing)
- [x] 0.6.4 Deepspeed release that includes all the merged PRs
| 07-14-2021 04:20:18 | 07-14-2021 04:20:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Nice work @stas00, have you tested Perceiver with DeepSpeed?<|||||>Would be glad to do that, @sameeravithana - all I need is a Trainer-based example script that I can test with.
As you can see from this map:
https://github.com/huggingface/transformers/blob/4a419d4995111c22d6842ee1bcd2d3f500150845/tests/deepspeed/test_model_zoo.py#L231-L270
I have each model tested by one of the HF Trainer examples. Is there one that can be used with Perceiver?
|
transformers | 12,694 | closed | Refactored code to improve performance | # What does this PR do?
Refactors several segments of code in `scripts`, `src`, `tests`, `utils` and `setup.py`, slightly improving performance by using more compact expressions and newer practices.
No new functions or methods/models were added; therefore no documentation changes were required.
## Before submitting
* [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
* [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
* [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
* [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
* [ ] Did you write any new necessary tests?
| 07-14-2021 02:01:40 | 07-14-2021 02:01:40 | |
transformers | 12,693 | closed | Strange output from summarization models | I am trying to get some models working for summarizing news articles, but for some reason I keep getting this strange output:
Output:
"In our series of letters from African journalists, film-maker and columnist Farai Sevenzo looks at ... [subject of input article]"
This has happened on multiple models (Pegasus, Bart, and Roberta) and multiple different inputs. The output is either the correct summary for the article or this incorrect output listed above. Does anyone have any idea how to fix this problem?
code:
```python
from transformers import PegasusTokenizer, TFPegasusModel, PegasusModel, TFPegasusForConditionalGeneration
import tensorflow as tf

src_text = """ article text put here """
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-xsum')
model = TFPegasusForConditionalGeneration.from_pretrained('google/pegasus-xsum')
inputs = tokenizer(src_text, truncation=True, padding='longest', return_tensors="tf")
translated = model.generate(**inputs)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text)
```
| 07-14-2021 01:08:00 | 07-14-2021 01:08:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I see this happening now with Pegasus models. Similar to @Zahz1, I get the following:
"In our series of letters from African journalists, filmmaker and columnist Ahmed Rashid looks at some of the issues facing the continent."
The text being summarized has nothing to do with the generated summary above. |
transformers | 12,692 | closed | Provide mask_time_indices to `_mask_hidden_states` to avoid double masking | The current behavior when training Hubert is to randomly mask some "spans" of time according to `mask_time_indices`, which can optionally be provided to `forward(..., mask_time_indices=Optional[torch.Tensor])`.
When this value is provided (useful for masking the loss over non-masked spans), the mask was applied outside of the `_mask_hidden_states(...)` function.
Then a new mask was generated inside `_mask_hidden_states(...)`, potentially masking some other tokens again, independently of what was provided through `mask_time_indices`.
This PR provides a fix by ensuring we only mask spans inside `_mask_hidden_states(...)` and correctly apply the masking operation only once. | 07-14-2021 00:04:03 | 07-14-2021 00:04:03 | Thanks a lot for fixing this! Can you also make the fix to `modeling_wav2vec2.py`? Think the same error is there<|||||>and for both tf_hubert and tf_wav2vec2, we need to do the change as well I think <|||||>Thanks a lot! |
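For reference, here is a standalone, hedged illustration of the single-masking behaviour described above. The function and argument names are mine, and the real `_mask_hidden_states` samples contiguous spans rather than independent positions, so treat this purely as a sketch of the idea, not the merged implementation:

```python
import torch

def mask_hidden_states(hidden_states, mask_embedding, mask_time_indices=None, mask_prob=0.05, training=True):
    """Apply time masking exactly once: reuse the caller-provided mask if any."""
    if mask_time_indices is None and training and mask_prob > 0:
        # no mask was passed in, so sample one here (the real code samples spans)
        mask_time_indices = torch.rand(hidden_states.shape[:2]) < mask_prob
    if mask_time_indices is not None:
        # overwrite the masked timesteps with the learned mask embedding
        hidden_states[mask_time_indices] = mask_embedding.to(hidden_states.dtype)
    return hidden_states

hidden = torch.randn(2, 10, 8)
out = mask_hidden_states(hidden, mask_embedding=torch.zeros(8))
```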
transformers | 12,691 | closed | OSError: Not found: "/root/.cache/huggingface/transformers/5ec31591d9130cc9be0872e6b3dc0b276e514ab96e68404ac4a876ff03cb413b.dbd4bc2544d5c9f8f0d109844726c1600fa95cf0ba770b54c146f702be6e55dc": No such file or directory Error #2 | This happens when try to load the model on another device
| 07-13-2021 23:16:48 | 07-13-2021 23:16:48 | Hello! What's the code that triggers this error?<|||||>```python
import os
import torch

loaded_model = torch.load("mt_luganda.pt", map_location=torch.device('cpu'))
```<|||||>Are you sure? That's unrelated to `transformers` or `huggingface`, yet I do see a `transformers` cache error in your issue title.<|||||>I am trying to load that model on another machine
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same problem.<|||||>Any solution?<|||||>Same issue. Trying to load a pickled tokenizer inside of a docker container:
```python
import pickle

with open(f"t5-base_tokenizer.pkl", 'rb') as f:
    tok = pickle.load(f)
``` |
transformers | 12,690 | closed | [Deepspeed] non-native optimizers are mostly ok with zero-offload | As noticed in https://github.com/huggingface/transformers/issues/11044#issuecomment-870742459, most non-DS optimizers should work with zero-offload as long as they have a cpu+gpu implementation (except LAMB).
So this PR relaxes the earlier incorrectly imposed restriction.
@sgugger
| 07-13-2021 23:15:47 | 07-13-2021 23:15:47 | |
transformers | 12,689 | closed | Flax MLM: Allow validation split when loading dataset from local file | # What does this PR do?
In Flax training scripts for MLM, CLM, and T5, this PR enables the option to apply validation-split-percentage when loading datasets from local file. This option already worked when loading standard HF datasets but was missing for local files.
## Who can review?
@patrickvonplaten @patil-suraj
| 07-13-2021 21:27:18 | 07-13-2021 21:27:18 | |
transformers | 12,688 | closed | [doc] parallelism - when to use which mode | # 🚀 Feature request
Was asked to expand https://huggingface.co/transformers/master/parallelism.html to include recommendations on which mode to use when. | 07-13-2021 21:22:27 | 07-13-2021 21:22:27 | @BramVanroy, please have a look if this addresses your question and I will add it to the doc. It of course assumes that https://huggingface.co/transformers/master/parallelism.html has been read (hence the abbreviations).
If more information is needed please don't hesitate to say what you feel is missing and how things can be improved. Thank you.
I just wasn't sure about single node / multi-gpu as I haven't played much with PP/TP on a single node.
------------
## Which Strategy To Use When
Here is a very rough outline of which parallelism strategy to use when. The first option in each list is typically the faster one.
**Single GPU**
* Model fits onto a single GPU:
1. Normal use
* Model doesn't fit onto a single GPU:
1. ZeRO + Offload CPU and optionally NVMe
**Single Node / Multi-GPU**
* Model fits onto a single GPU:
1. DDP - Distributed DP
2. ZeRO - may or may not be faster depending on the situation and configuration used
* Model doesn't fit onto a single GPU:
1. ZeRO
2. TP
3. PP
(not sure which one will be faster here - haven't done enough experiments)
**Multi-Node / Multi-GPU**
* When you have fast inter-node connectivity:
1. ZeRO - as it requires close to no modifications to the model
2. PP+TP+DP - less communications, but requires massive changes to the model
* When you have slow inter-node connectivity:
1. DP+PP+TP+ZeRO
<|||||>This is already very useful for most people I think! I personally haven't tried anything other than regular training and single-node multi-GPU DDP, but I can see how this small overview helps users. It makes it easier for them to "choose what to do".
Thanks! |
transformers | 12,687 | closed | Assert evaluation_strategy not no when load_best_model_at_end | # What does this PR do?
Since using `--load_best_model_at_end` overrides the `save_strategy` with the `evaluation_strategy`, this PR adds a defensive check to make sure that strategy is not "no" (otherwise nothing is ever saved).
Fixes #12685 | 07-13-2021 20:54:35 | 07-13-2021 20:54:35 | > I see that practically:
>
> load_best_model_at_end cancels out save_strategy=steps.
> load_best_model_at_end has no impact on save_strategy=epoch.
That is not completely correct. One should also add that if `evaluation_strategy=steps`, a save is done every `eval_steps`, and if `evaluation_strategy=epoch`, a save is done every epoch. Basically the model needs to be saved every time there is an evaluation, to keep track of the best checkpoint.
To be honest, it makes absolutely no sense to use `--load_best_model_at_end` if the `evaluation_strategy` and `save_strategy` are not the same (and in the case of steps, with the same number of steps) so perhaps this is what the assert should be. The current implementation tries to avoid having the user input the same thing twice, but maybe it is too confusing.<|||||>That works too.
My initial suggestion was to only flag to the user the silent override of `save_steps`, but if we can do better, then by all means let's do that!<|||||>Superseded by #12786 |
transformers | 12,686 | closed | No docs for v2.3.0 | ## Environment info
- `transformers` version: 2.3.0
### Who can help
Documentation: @sgugger
## To reproduce
Steps to reproduce the behavior:
Click [here](https://huggingface.co/transformers/v2.3.0/model_doc/gpt2.html#gpt2doubleheadsmodel) and see there are no docs for 2.3.0.
## Expected behavior
Documentation should be displayed when clicking [here](https://huggingface.co/transformers/v2.3.0/model_doc/gpt2.html#gpt2doubleheadsmodel) | 07-13-2021 20:31:56 | 07-13-2021 20:31:56 | @sgugger, I think all docs up to 2.9 are gone (checked 2.8 randomly and it was missing) so there might be a broader issue.
<|||||>It might be linked to using more recent versions of sphinx when building it, though I'm not sure. This is not a priority for us, especially for such an older version (it would be different if the docs of the last major version were down), so I don't think anyone on our side will investigate this further.<|||||>@sgugger, you are making a valid point.
However, I just wanted to highlight that _a lot_ of research hinges on these older versions as, unfortunately, people do not maintain their research code once it's out there. It would be helpful if the docs did not disappear so we can still work with others' legacy code when we have to. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,685 | closed | [trainer] `--load_best_model_at_end` silently turns off `--save_steps` settings | Splitting off from https://github.com/huggingface/transformers/pull/12477#discussion_r668326212
Currently `--load_best_model_at_end` silently turns off `--save_steps` settings when `--do_eval` is off (i.e. unless `--evaluation_strategy` is set to something other than `"no"`, which automatically turns on `--do_eval`)
The proposal is to assert if:
`--load_best_model_at_end` is set and `--evaluation_strategy` is `"no"`
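A minimal sketch of what such a check could look like (the exact placement inside `TrainingArguments.__post_init__` and the message wording are assumptions, not the merged fix):

```python
from transformers import TrainingArguments
from transformers.trainer_utils import IntervalStrategy

def check_best_model_args(args: TrainingArguments):
    # hypothetical guard mirroring the proposed assert
    if args.load_best_model_at_end and args.evaluation_strategy == IntervalStrategy.NO:
        raise ValueError(
            "--load_best_model_at_end requires intermediate evaluations, "
            "so --evaluation_strategy must be 'steps' or 'epoch' (not 'no')."
        )
```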
Reproducible test:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --warmup_steps 50 --max_train_samples 50 --save_steps 1
```
which saves checkpoints.
then adding `--load_best_model_at_end` stops saving those.
@sgugger. | 07-13-2021 18:42:08 | 07-13-2021 18:42:08 | Yes, as said in that comment, I think it's reasonable if we raise an error if `--load_best_model_at_end` is set and `--evaluation_strategy` is "no" since there is no "best model" to pick from in that case. I can do it later today if you want.<|||||>I'm still not 100% clear on how this feature's reliance on eval affects saving checkpoints, but if it solves the problem that's good enough for me.
Absolutely no rush on this one.
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hasn't this one been resolved already?<|||||>Yes, this was fixed by #12786 in the end. |
transformers | 12,684 | closed | Add timeout to CI. | Adds a global timeout of 60 seconds for non-slow tests, and a global timeout of 5 minutes for slow tests.
These can be adjusted later on, but it prevents the two hanging suites right now and is important to merge to get feedback on the current coverage.
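As a hedged illustration (whether the CI wires this up via CLI flags or per-test markers is my assumption, not a description of the actual workflow files), the `pytest-timeout` plugin kills a single hanging test instead of letting it stall the whole job:

```python
import time

import pytest

@pytest.mark.timeout(60)  # mirrors the 60s non-slow budget mentioned above
def test_finishes_quickly():
    time.sleep(1)  # anything running past 60s would fail here instead of hanging CI
```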
I've re-enabled the `-v` option on `pytest` as this was instrumental in discovering the failing test, and would have saved me a lot of time had it been activated by default.
I'm also removing the `pytest-sugar` dependency because, while it is a nice QOL improvement, it was detrimental to the discoverability of the hanging test. | 07-13-2021 17:00:52 | 07-13-2021 17:00:52 | |
transformers | 12,683 | closed | confusing description in prepare_seq2seq_batch of MBart | ## Information
The model that I am using is `MBart-50`
In the description of `prepare_seq2seq_batch`, it says _Prepare model inputs for translation. For best performance, translate one sentence at a time._. Does this mean we should not do batching if we want to obtain the best performance? I am curious why it is the case since the paper itself does not mention that.
## Expected behavior
The performance should be the same whether batching is used or not.
@patil-suraj | 07-13-2021 16:40:09 | 07-13-2021 16:40:09 | Hi @XuhuiZhou, the `prepare_seq2seq_batch` method is now deprecated and the description is a bit outdated.
we don't recommend using it anymore. You could refer to this section to see how to prepare data for mbart-50 https://huggingface.co/transformers/model_doc/mbart.html#training-of-mbart-50<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
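For reference, a minimal sketch of the batched data preparation described in the mBART-50 docs linked above (the checkpoint name, language codes and example sentences are illustrative, and `as_target_tokenizer` reflects the API at the time this thread was written):

```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX", tgt_lang="ro_RO"
)
src_texts = ["UN Chief Says There Is No Military Solution in Syria"]
tgt_texts = ["Şeful ONU declară că nu există o soluţie militară în Siria"]

batch = tokenizer(src_texts, padding=True, truncation=True, return_tensors="pt")
with tokenizer.as_target_tokenizer():
    batch["labels"] = tokenizer(tgt_texts, padding=True, truncation=True, return_tensors="pt").input_ids
```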
transformers | 12,682 | closed | Fix minor docstring typos. | # What does this PR do?
Fix minor docstring typos in #12664
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 07-13-2021 16:01:32 | 07-13-2021 16:01:32 | |
transformers | 12,681 | closed | Flax - Loading pretrained model overwrites weights of different shapes | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: Ubuntu
- Python version: 3.9
### Who can help
@patil-suraj @sgugger
## Information
Model I am using (Bert, XLNet ...): Custom FlaxBart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a custom model by subclassing - just change output shape (lm_head & final_logits_bias)
2. use `CustomModel.from_pretrained('facebook/bart-large-c')`
3. check `model.params['final_logits_bias'].shape`, it will come from the pretrained model
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The shape of the weights should be checked prior to being overwritten.
Right now my approach is the following (see the sketch below):
* load the pretrained model
* init the custom model from the config
* manually update the weights that need it
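A rough sketch of that workaround, assuming the only shape changes are in the output head; the checkpoint name, the changed `vocab_size`, and the exact top-level keys of `model.params` are assumptions to double-check against your own model:

```python
from flax.core.frozen_dict import freeze, unfreeze
from transformers import BartConfig, FlaxBartForConditionalGeneration

pretrained = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
config = BartConfig.from_pretrained("facebook/bart-large-cnn", vocab_size=32128)  # new output shape

model = FlaxBartForConditionalGeneration(config, seed=0)  # fresh init, correct shapes
new_params = unfreeze(model.params)
for key, value in unfreeze(pretrained.params).items():
    # copy everything whose shape did not change; keep the freshly initialized
    # lm_head / final_logits_bias that depend on the new vocab size
    if key not in ("lm_head", "final_logits_bias"):
        new_params[key] = value
model.params = freeze(new_params)
```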
| 07-13-2021 13:49:24 | 07-13-2021 13:49:24 | This should be fixed by the work in #12664 <|||||>Closing because it was fixed |
transformers | 12,680 | closed | Running out of memory when resuming training | Might be a similar problem to #11317; the node runs out of CPU memory (512GB).
To reproduce:
(i)
```
deepspeed --hostfile myhostfile \ ${_PATH}/examples/pytorch/summarization/run_summarization.py \
--model_name_or_path hyunwoongko/blenderbot-9B \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--deepspeed ${_PATH}/tests/deepspeed/ds_config_zero3.json \
--logging_steps 1 \
--fp16 \
--overwrite_output_dir \
--save_steps 10 \
--gradient_accumulation_steps 1 \
--evaluation_strategy="steps" \
--max_train_samples 10024 \
--max_eval_samples 32 \
--max_source_length 128
--max_target_length 128 \
--eval_steps 5
```
(ii)
Afterwards in order to resume I use the option `--resume_from_checkpoint /tmp/tst-summarization/checkpoint-10`.
A workaround is to export the FP32 weights using the script `zero_to_fp32.py` as described in [https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out](https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out) and restart directly from `pytorch_model.bin`, nevertheless it would be better to resume directly from the deepspeed checkpoint, if possible.
torch: 1.8.1+cu111
transformers: 4.9.0.dev0
deepspeed: 0.4.4+d1a7a55
log: [log.txt](https://github.com/huggingface/transformers/files/6808841/log.txt)
@stas00 | 07-13-2021 13:08:07 | 07-13-2021 13:08:07 | Thank you for the detailed report, @thies1006
I suspect that at some point we have the model allocated more than once.
I will profile the memory usage and get back to you with the findings.
I'm glad to hear that meanwhile you have a workaround.<|||||>So first I see our non-deepspeed checkpoint-loading is inefficient CPU memory-wise
```
# save
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --max_train_samples 50 --save_steps 1 --skip_memory_metrics 0
# load:
export BS=16; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --max_train_samples 50 --save_steps 1 --skip_memory_metrics 0 --resume_from_checkpoint output_dir/checkpoint-1
```
```
# save
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = -153MB
init_mem_cpu_peaked_delta = 152MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_loss = 2.9967
train_mem_cpu_alloc_delta = 1324MB
train_mem_cpu_peaked_delta = 125MB
train_mem_gpu_alloc_delta = 933MB
train_mem_gpu_peaked_delta = 355MB
train_runtime = 0:00:03.47
train_samples = 50
train_samples_per_second = 14.386
train_steps_per_second = 0.575
```
```
# load
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = -153MB
init_mem_cpu_peaked_delta = 152MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_loss = 1.4817
train_mem_cpu_alloc_delta = 1552MB
train_mem_cpu_peaked_delta = 124MB
train_mem_gpu_alloc_delta = 931MB
train_mem_gpu_peaked_delta = 228MB
train_runtime = 0:00:03.45
train_samples = 50
train_samples_per_second = 14.472
train_steps_per_second = 0.579
```
As you can see the checkpoint loading takes ~225MB more:
```
- train_mem_cpu_alloc_delta = 1324MB
+ train_mem_cpu_alloc_delta = 1552MB
```
which is exactly the size of the t5-small (230MB) model.
That is at some point it keeps 2 full copies of the model in CPU memory.
cc: @sgugger
So the issue might not be in deepspeed, but will check that next.
<|||||>Oh that is weird. At the top of my mind the first culprit could be the `state_dict` we loaded that is not release by the `Trainer` for some reason. If you add a `del state_dict` on [this line](https://github.com/huggingface/transformers/blob/a18a17d2b6357321279190963765085a0ef4d466/src/transformers/trainer.py#L1078) does it release that copy? (Can't fully test right now which is why I'm asking you.)<|||||>Yes, that did the trick! It's the same memory usage now. Applied here: https://github.com/huggingface/transformers/pull/12718<|||||>So back to the deepspeed side of this Issue. I wasn't able to see the problem with `t5-small`, but I can see it clearly with `t5-base`
```
# save
BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir --overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --learning_rate 3e-3 --logging_steps 0 --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1 --skip_memory_metrics 0
# load:
BS=16; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir --overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train --num_train_epochs 1 --per_device_train_batch_size $BS --learning_rate 3e-3 --logging_steps 0 --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --max_train_samples 50 --deepspeed tests/deepspeed/ds_config_zero3.json --save_steps 1 --skip_memory_metrics 0 --resume_from_checkpoint output_dir/checkpoint-1
```
```
# save
***** train metrics *****
train_mem_cpu_alloc_delta = 5542MB
train_mem_cpu_peaked_delta = 424MB
train_mem_gpu_alloc_delta = -394MB
train_mem_gpu_peaked_delta = 1259MB
```
```
# load
***** train metrics *****
train_mem_cpu_alloc_delta = 5109MB
train_mem_cpu_peaked_delta = 1944MB
train_mem_gpu_alloc_delta = -394MB
train_mem_gpu_peaked_delta = 804MB
```
So it's easy to see that at some point there is a temporary jump of 1.1GB compared to the normal run (t5-base is about 850MB), which most likely means that several copies of it are loaded into CPU memory at some point.
<|||||>OK, so I did some profiling with an even larger model: t5-large (2.7GB) so it's easier to see what's happening.
**We need to take into account that Deepspeed needs to load optimizer states, which non-Deepspeed run doesn't do! And that makes a huge difference.**
So our model has close to 0.75B params:
```
$ python -c 'from transformers import T5ForConditionalGeneration; model = T5ForConditionalGeneration.from_pretrained("t5-large"); print(sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values()))'
737,668,096 # 737M params
```
Now the checkpoint contains 4 bytes for fp32 weights and 8 bytes for optimizer, 12 in total:
```
python -c 'print(f"{737668096*12 / 2**30 :0.2f}GB")'
8.24GB
```
Indeed if we check the checkpoint folder:
```
du -sh output_dir/checkpoint-1/global_step1/
8.3G output_dir/checkpoint-1/global_step1/
```
And this is what accounts for the huge peak of CPU RAM that gets temporarily used when the checkpoint is loaded.
So, as you indeed figured out, if you bypass the checkpoint loading and load just the weights you extracted with `zero_to_fp32.py`, you have no problem with temporarily needing more CPU memory than required for the normal run.
In general this should be possible to fix by not allocating the model until the checkpoint loading (see https://github.com/huggingface/transformers/issues/12274 - which was just made available in pytorch), and probably something similar for the optimizer. But I can't promise you if and when this will happen. This is very important I think!
Perhaps a simpler solution until then would be to allocate some swap memory on an nvme drive?
Please let me know if this is helpful.
<|||||>Thank you very much for the insights @stas00 !! I just wanted to bring this up because the order of magnitude was surprising to me. As I understand you, model and optimizer states are allocating memory twice (model init and checkpoint loading).
My checkpoint has the size (for Blenderbot-9B):
```
du -sh /tmp/tst-summarization/checkpoint-10/global_step10/
106G /tmp/tst-summarization/checkpoint-10/global_step10/
```
I also tried with Blenderbot-3B; there the checkpoint folder is 61GB and CPU RAM consumption peaks at about 330GB (a short peak, as you said).
So, in summary, I'm still wondering about the numbers. But as I understand you, this is normal and already addressed. I'll try with the nvme btw, thanks for the hint!
I think we can close this for now.<|||||>The main issue is loading optimizer states which are 2x bigger than the fp32 model.
Actually, I thought of a possible solution last night. This is staggered checkpoint loading.
So if you have 4 gpus on a node, right now the whole checkpoint folder gets loaded into CPU memory at once. However, what if we loaded one gpu at a time? That would require only 1/4 of the extra CPU memory, since once a gpu finishes loading it returns the CPU memory back to the pool.
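A hedged sketch of that staggering idea on a single node (the checkpoint-loading call is a placeholder, not a real DeepSpeed API, and it assumes `torch.distributed` is already initialized):

```python
import torch.distributed as dist

def staggered_load(local_rank: int, local_world_size: int, load_fn):
    """Let one local rank load at a time so peak CPU RAM stays ~1/local_world_size of today's."""
    for turn in range(local_world_size):
        if local_rank == turn:
            load_fn()      # placeholder for the actual checkpoint-loading call
        dist.barrier()     # everyone else waits until the current rank is done
```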
I think this approach should solve your limitation. Let me try to implement this on the deepspeed side.<|||||>After trying to implement staggered load, I discovered that each process loads zero checkpoints for all ranks in deepspeed,
Let's continue this discussion over at Deepspeed as it's not really a transformers' issue
https://github.com/microsoft/DeepSpeed/issues/1236
|
transformers | 12,679 | closed | Fix multiple choice doc examples | # What does this PR do?
The multiple choice example docstrings was fixed for PyTorch but not Flax and TensorFlow. This PR addresses that. | 07-13-2021 12:39:57 | 07-13-2021 12:39:57 | |
transformers | 12,678 | closed | Mask prediction does not work with whitespace before mask token | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten
Model hub:
Path of the Repository on the hub:
https://huggingface.co/Temur/qartvelian-roberta-base
## Information
The model I am using (Bert, XLNet ...): RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
from transformers import pipeline, AutoTokenizer, RobertaForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("./qartvelian-roberta-base")
model = RobertaForMaskedLM.from_pretrained("./qartvelian-roberta-base", from_flax=True)
unmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
unmask("α©ααα α‘ααα¨ααααα<mask>.")
```
I'm getting the right result when I pass the string as shown in the snippet above.
But if I pass a string with whitespace before the `<mask>`, I get weird results.
### How I trained the tokenizer:
This is the script I have used to train the Tokenizer:
```python
# Import libraries
from pathlib import Path
from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer
from datasets import load_dataset
# preparing files
model_dir = "./qartvelian-roberta-base" # ${MODEL_DIR}
train_paths = [str(x) for x in Path("./corpuses/").glob("**/*.txt")]
test_path = train_paths.pop(0)
print(f"training from: {train_paths}\ntesting from: {test_path}")
# load dataset
dataset = load_dataset('text', data_files={'train': 'corpuses/pre_processed.txt', 'validation': 'corpuses/validate.txt'})
train_dataset = dataset['train']
# Instantiate tokenizer
tokenizer = ByteLevelBPETokenizer()
# Batch Generator
def batch_iterator(batch_size=1000):
for i in range(0, len(train_dataset), batch_size):
yield train_dataset[i: i + batch_size]["text"]
# Customized training
tokenizer.train_from_iterator(batch_iterator(), vocab_size=50265, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
# Save files to disk
tokenizer.save(f"./qartvelian-roberta-base/tokenizer.json")
```
Any advice on how to fix the tokenizer?
I'd also love to know how to avoid this problem at the pretraining stage.
| 07-13-2021 12:08:43 | 07-13-2021 12:08:43 | Sure! It's not very easy to avoid before pretraining (it depends on how you set up the data collator), but if you know how the special tokens work in tokenizers and transformers you can easily fix it next time.
If you notice that `"word<mask>"` works well, but `"word <mask>"` doesn't then this means that during pretraining your model was trained on data that was processed to "word<mask>" and ideally you would like all inputs to be processed this way when using your pretrained model.
To do so we need to be sure that both `tokenizer("word<mask>")` and `tokenizer("word <mask>")` get processed to the same `input_ids` .
E.g. compare (new):
```python
from transformers import RobertaTokenizerFast
tok = RobertaTokenizerFast.from_pretrained("flax-community/qartvelian-roberta-base-fix")
tok.decode(tok.encode("Hello <mask>"))
```
to (old)
```python
from transformers import RobertaTokenizerFast
tok = RobertaTokenizerFast.from_pretrained("Temur/qartvelian-roberta-base")
tok.decode(tok.encode("Hello <mask>"))
```
-> the "new" tokenizer should strip away the whitespace while the "old" one doesn't.
If you look into your tokenizer file here: https://huggingface.co/Temur/qartvelian-roberta-base/raw/main/tokenizer.json you can see that `lstrip` (left-strip) for the mask_token is set to False, while in https://huggingface.co/flax-community/qartvelian-roberta-base-fix/raw/main/tokenizer.json the `lstrip` attribute of the `<mask>` token dict is set to True => so the new tokenizer strips away the left space of all `<mask>` tokens.
Doing this change is quite easy, all you have to do is:
```python
from transformers import RobertaTokenizerFast, AddedToken
tok = RobertaTokenizerFast.from_pretrained("Temur/qartvelian-roberta-base")
tok.mask_token = AddedToken("<mask>", lstrip=True)
``` |
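A possible follow-up to the snippet above (hedged; whether you overwrite the existing repo or upload under a new name is up to you) is to persist the fixed tokenizer so the hub widget picks it up:

```python
tok.save_pretrained("qartvelian-roberta-base-fix")  # continues from `tok` above; upload the saved files to the hub repo
```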
transformers | 12,677 | closed | Processing custom wikipedia data with clm training script throws error when "blockifying" data | I'm loading in a Wikipedia dataset from the huggingface `datasets` library to run the CLM script.
```python
from datasets import load_dataset
import pdb
def load_and_clean_oscar():
dataset = load_dataset('oscar', 'unshuffled_deduplicated_sv', split="train")
dataset = dataset.remove_columns(['id'])
print(dataset)
pdb.set_trace()
filtered_dataset = dataset.map(filter_oscar)
filtered_dataset[:3]
print(filtered_dataset[:3])
pdb.set_trace()
return filtered_dataset
def filter_oscar(batch):
batch["text"] = " ".join(batch["text"].split("\n"))
return batch
def load_and_clean_wiki():
dataset = load_dataset('wiki40b', 'sv', beam_runner='DirectRunner', split="train")
dataset = dataset.remove_columns(['wikidata_id', 'version_id'])
filtered_dataset = dataset.map(filter_wikipedia)
# filtered_dataset[:3]
# print(filtered_dataset[:3])
return filtered_dataset
def filter_wikipedia(batch):
batch["text"] = " ".join(batch["text"].split("\n_START_SECTION_\n"))
batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
batch["text"] = " ".join(batch["text"].split("\n_START_ARTICLE_\n"))
batch["text"] = " ".join(batch["text"].split("\n_START_PARAGRAPH_\n"))
batch["text"] = " ".join(batch["text"].split("_NEWLINE_"))
batch["text"] = " ".join(batch["text"].split("\xa0"))
return batch
```
The CLM script:
```python
#!/usr/bin/env python
# coding=utf-8
# Copyright 2021 The HuggingFace Team All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Pre-training/Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset.
Here is the full list of checkpoints on the hub that can be fine-tuned by this script:
https://huggingface.co/models?filter=causal-lm
"""
# You can also adapt this script on your own causal language modeling task. Pointers for this are left as comments.
import logging
import math
import os
import sys
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable, Optional
import datasets
from datasets import Dataset, load_dataset
from tqdm import tqdm
import jax
import jax.numpy as jnp
import optax
import transformers
from load_from_hf import load_and_clean_wiki
from flax import jax_utils, traverse_util
from flax.jax_utils import unreplicate
from flax.training import train_state
from flax.training.common_utils import get_metrics, onehot, shard, shard_prng_key
from transformers import (
CONFIG_MAPPING,
FLAX_MODEL_FOR_CAUSAL_LM_MAPPING,
AutoConfig,
AutoTokenizer,
FlaxAutoModelForCausalLM,
HfArgumentParser,
TrainingArguments,
is_tensorboard_available,
)
from transformers.testing_utils import CaptureLogger
logger = logging.getLogger(__name__)
MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_CAUSAL_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
@dataclass
class ModelArguments:
"""
Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch.
"""
model_name_or_path: Optional[str] = field(
default=None,
metadata={
"help": "The model checkpoint for weights initialization."
"Don't set if you want to train a model from scratch."
},
)
model_type: Optional[str] = field(
default=None,
metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)},
)
config_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"}
)
tokenizer_name: Optional[str] = field(
default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"}
)
cache_dir: Optional[str] = field(
default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"}
)
use_fast_tokenizer: bool = field(
default=True,
metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."},
)
dtype: Optional[str] = field(
default="float32",
metadata={
"help": "Floating-point format in which the model weights should be initialized and trained. Choose one of `[float32, float16, bfloat16]`."
},
)
@dataclass
class DataTrainingArguments:
"""
Arguments pertaining to what data we are going to input our model for training and eval.
"""
dataset_name: Optional[str] = field(
default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."}
)
dataset_config_name: Optional[str] = field(
default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."}
)
train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."})
validation_file: Optional[str] = field(
default=None,
metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."},
)
max_train_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of training examples to this "
"value if set."
},
)
max_eval_samples: Optional[int] = field(
default=None,
metadata={
"help": "For debugging purposes or quicker training, truncate the number of evaluation examples to this "
"value if set."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
validation_split_percentage: Optional[int] = field(
default=5,
metadata={
"help": "The percentage of the train set used as validation set in case there's no validation split"
},
)
block_size: Optional[int] = field(
default=None,
metadata={
"help": "Optional input sequence length after tokenization. "
"The training dataset will be truncated in block of this size for training. "
"Default to the model max input length for single sentence inputs (take into account special tokens)."
},
)
overwrite_cache: bool = field(
default=False, metadata={"help": "Overwrite the cached training and evaluation sets"}
)
preprocessing_num_workers: Optional[int] = field(
default=None,
metadata={"help": "The number of processes to use for the preprocessing."},
)
def __post_init__(self):
if self.dataset_name is None and self.train_file is None and self.validation_file is None:
raise ValueError("Need either a dataset name or a training/validation file.")
else:
if self.train_file is not None:
extension = self.train_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file."
if self.validation_file is not None:
extension = self.validation_file.split(".")[-1]
assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file."
class TrainState(train_state.TrainState):
dropout_rng: jnp.ndarray
def replicate(self):
return jax_utils.replicate(self).replace(dropout_rng=shard_prng_key(self.dropout_rng))
def data_loader(rng: jax.random.PRNGKey, dataset: Dataset, batch_size: int, shuffle: bool = False):
"""
Returns batches of size `batch_size` from truncated `dataset`, sharded over all local devices.
Shuffle batches if `shuffle` is `True`.
"""
steps_per_epoch = len(dataset) // batch_size
if shuffle:
batch_idx = jax.random.permutation(rng, len(dataset))
else:
batch_idx = jnp.arange(len(dataset))
batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch.
batch_idx = batch_idx.reshape((steps_per_epoch, batch_size))
for idx in batch_idx:
batch = dataset[idx]
batch = {k: jnp.array(v) for k, v in batch.items()}
batch = shard(batch)
yield batch
def write_train_metric(summary_writer, train_metrics, train_time, step):
summary_writer.scalar("train_time", train_time, step)
train_metrics = get_metrics(train_metrics)
for key, vals in train_metrics.items():
tag = f"train_{key}"
for i, val in enumerate(vals):
summary_writer.scalar(tag, val, step - len(vals) + i + 1)
def write_eval_metric(summary_writer, eval_metrics, step):
for metric_name, value in eval_metrics.items():
summary_writer.scalar(f"eval_{metric_name}", value, step)
def create_learning_rate_fn(
train_ds_size: int, train_batch_size: int, num_train_epochs: int, num_warmup_steps: int, learning_rate: float
) -> Callable[[int], jnp.array]:
"""Returns a linear warmup, linear_decay learning rate function."""
steps_per_epoch = train_ds_size // train_batch_size
num_train_steps = steps_per_epoch * num_train_epochs
warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps)
decay_fn = optax.linear_schedule(
init_value=learning_rate, end_value=0, transition_steps=num_train_steps - num_warmup_steps
)
schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps])
return schedule_fn
def main():
# See all possible arguments in src/transformers/training_args.py
# or by passing the --help flag to this script.
# We now keep distinct sets of args, for a cleaner separation of concerns.
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
# If we pass only one argument to the script and it's the path to a json file,
# let's parse it to get our arguments.
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if (
os.path.exists(training_args.output_dir)
and os.listdir(training_args.output_dir)
and training_args.do_train
and not training_args.overwrite_output_dir
):
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty."
"Use --overwrite_output_dir to overcome."
)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
# Setup logging, we only want one process per machine to log things on the screen.
logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR)
if jax.process_index() == 0:
datasets.utils.logging.set_verbosity_warning()
transformers.utils.logging.set_verbosity_info()
else:
datasets.utils.logging.set_verbosity_error()
transformers.utils.logging.set_verbosity_error()
# Set the verbosity to info of the Transformers logger (on main process only):
logger.info(f"Training/evaluation parameters {training_args}")
# Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below)
# or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/
# (the dataset will be downloaded automatically from the datasets Hub).
#
# For CSV/JSON files, this script will use the column called 'text' or the first column if no column called
# 'text' is found. You can easily tweak this behavior (see below).
#
# In distributed training, the load_dataset function guarantees that only one local process can concurrently
# download the dataset.
if data_args.dataset_name is not None:
# loading the wiki data from the load and clean file
dataset = load_and_clean_wiki()
print("the dataset is", dataset)
# if "validation" not in dataset.keys():
# dataset["validation"] = load_dataset(
# data_args.dataset_name,
# data_args.dataset_config_name,
# split=f"train[:{data_args.validation_split_percentage}%]",
# cache_dir=model_args.cache_dir,
# )
# dataset["train"] = load_dataset(
# data_args.dataset_name,
# data_args.dataset_config_name,
# split=f"train[{data_args.validation_split_percentage}%:]",
# cache_dir=model_args.cache_dir,
# )
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.train_file.split(".")[-1]
if extension == "txt":
extension = "text"
dataset = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir)
# See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at
# https://huggingface.co/docs/datasets/loading_datasets.html.
# Load pretrained model and tokenizer
# Distributed training:
# The .from_pretrained methods guarantee that only one local process can concurrently
# download model & vocab.
if model_args.config_name:
config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir)
elif model_args.model_name_or_path:
config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)
else:
config = CONFIG_MAPPING[model_args.model_type]()
logger.warning("You are instantiating a new config instance from scratch.")
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer
)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer
)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
if model_args.model_name_or_path:
model = FlaxAutoModelForCausalLM.from_pretrained(
model_args.model_name_or_path, config=config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype)
)
else:
model = FlaxAutoModelForCausalLM.from_config(
config, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype)
)
# Preprocessing the datasets.
# First we tokenize all the texts.
# if training_args.do_train:
# column_names = dataset["train"].column_names
# else:
# column_names = dataset["validation"].column_names
text_column_name = "text"
# since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function
tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base")
def tokenize_function(examples):
with CaptureLogger(tok_logger) as cl:
output = tokenizer(examples[text_column_name])
# clm input could be much much longer than block_size
if "Token indices sequence length is longer than the" in cl.out:
tok_logger.warning(
"^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits before being passed to the model."
)
return output
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
# remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
if data_args.block_size is None:
block_size = tokenizer.model_max_length
if block_size > config.max_position_embeddings:
logger.warning(
f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). "
"Picking 1024 instead. You can change that default value by passing --block_size xxx."
)
block_size = 1024
else:
if data_args.block_size > tokenizer.model_max_length:
logger.warning(
f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model"
f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}."
)
block_size = min(data_args.block_size, tokenizer.model_max_length)
# Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size.
def group_texts(examples):
# Concatenate all texts.
# print("the examples are", examples)
# import pdb
# pdb.set_trace()
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
if total_length >= block_size:
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
# Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder
# for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower
# to preprocess.
#
# To speed up this part, we use multiprocessing. See the documentation of the map method for more information:
# https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=data_args.preprocessing_num_workers,
load_from_cache_file=not data_args.overwrite_cache,
)
if training_args.do_train:
if "train" not in tokenized_datasets:
raise ValueError("--do_train requires a train dataset")
train_dataset = lm_datasets["train"]
if data_args.max_train_samples is not None:
train_dataset = train_dataset.select(range(data_args.max_train_samples))
if training_args.do_eval:
if "validation" not in tokenized_datasets:
raise ValueError("--do_eval requires a validation dataset")
eval_dataset = lm_datasets["validation"]
if data_args.max_eval_samples is not None:
eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))
# Enable tensorboard only on the master node
has_tensorboard = is_tensorboard_available()
if has_tensorboard and jax.process_index() == 0:
try:
from flax.metrics.tensorboard import SummaryWriter
summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir))
except ImportError as ie:
has_tensorboard = False
logger.warning(
f"Unable to display metrics through TensorBoard because some package are not installed: {ie}"
)
else:
logger.warning(
"Unable to display metrics through TensorBoard because the package is not installed: "
"Please run pip install tensorboard to enable."
)
# Initialize our training
rng = jax.random.PRNGKey(training_args.seed)
rng, dropout_rng = jax.random.split(rng)
# Store some constant
num_epochs = int(training_args.num_train_epochs)
train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()
eval_batch_size = int(training_args.per_device_eval_batch_size) * jax.device_count()
steps_per_epoch = len(train_dataset) // train_batch_size
total_train_steps = steps_per_epoch * num_epochs
# Create learning rate schedule
linear_decay_lr_schedule_fn = create_learning_rate_fn(
len(train_dataset),
train_batch_size,
training_args.num_train_epochs,
training_args.warmup_steps,
training_args.learning_rate,
)
# We use Optax's "masking" functionality to not apply weight decay
# to bias and LayerNorm scale parameters. decay_mask_fn returns a
# mask boolean with the same structure as the parameters.
# The mask is True for parameters that should be decayed.
# Note that this mask is specifically adapted for FlaxGPT2.
# For other models, one should correct the layer norm parameter naming
# accordingly.
def decay_mask_fn(params):
flat_params = traverse_util.flatten_dict(params)
flat_mask = {
path: (path[-1] != "bias" and path[-2:] not in [("ln_1", "scale"), ("ln_2", "scale"), ("ln_f", "scale")])
for path in flat_params
}
return traverse_util.unflatten_dict(flat_mask)
# create adam optimizer
if training_args.adafactor:
# We use the default parameters here to initialize adafactor,
# For more details about the parameters please check https://github.com/deepmind/optax/blob/ed02befef9bf81cbbf236be3d2b0e032e9ed4a40/optax/_src/alias.py#L74
optimizer = optax.adafactor(
learning_rate=linear_decay_lr_schedule_fn,
)
else:
optimizer = optax.adamw(
learning_rate=linear_decay_lr_schedule_fn,
b1=training_args.adam_beta1,
b2=training_args.adam_beta2,
eps=training_args.adam_epsilon,
weight_decay=training_args.weight_decay,
mask=decay_mask_fn,
)
# Setup train state
state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng)
def loss_fn(logits, labels):
shift_logits = logits[..., :-1, :]
shift_labels = labels[..., 1:]
loss = optax.softmax_cross_entropy(shift_logits, onehot(shift_labels, shift_logits.shape[-1]))
return loss.mean()
# Define gradient update step fn
def train_step(state, batch):
dropout_rng, new_dropout_rng = jax.random.split(state.dropout_rng)
def compute_loss(params):
labels = batch.pop("labels")
logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
loss = loss_fn(logits, labels)
return loss
grad_fn = jax.value_and_grad(compute_loss)
loss, grad = grad_fn(state.params)
grad = jax.lax.pmean(grad, "batch")
new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng)
metrics = {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)}
metrics = jax.lax.pmean(metrics, axis_name="batch")
return new_state, metrics
# Define eval fn
def eval_step(params, batch):
labels = batch.pop("labels")
logits = model(**batch, params=params, train=False)[0]
loss = loss_fn(logits, labels)
# summarize metrics
metrics = {"loss": loss}
metrics = jax.lax.pmean(metrics, axis_name="batch")
return metrics
# Create parallel version of the train and eval step
p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
p_eval_step = jax.pmap(eval_step, "batch")
# Replicate the train state on each device
state = state.replicate()
logger.info("***** Running training *****")
logger.info(f" Num examples = {len(train_dataset)}")
logger.info(f" Num Epochs = {num_epochs}")
logger.info(f" Instantaneous batch size per device = {training_args.per_device_train_batch_size}")
logger.info(f" Total train batch size (w. parallel & distributed) = {train_batch_size}")
logger.info(f" Total optimization steps = {total_train_steps}")
train_time = 0
train_metrics = []
epochs = tqdm(range(num_epochs), desc=f"Epoch ... (1/{num_epochs})", position=0)
for epoch in epochs:
# ======================== Training ================================
train_start = time.time()
# Create sampling rng
rng, input_rng = jax.random.split(rng)
# Generate an epoch by shuffling sampling indices from the train dataset
train_loader = data_loader(input_rng, train_dataset, train_batch_size, shuffle=True)
steps_per_epoch = len(train_dataset) // train_batch_size
# train
for step in tqdm(range(steps_per_epoch), desc="Training...", position=1, leave=False):
batch = next(train_loader)
state, train_metric = p_train_step(state, batch)
train_metrics.append(train_metric)
cur_step = epoch * (len(train_dataset) // train_batch_size) + step
if cur_step % training_args.logging_steps == 0 and cur_step > 0:
# Save metrics
train_metric = unreplicate(train_metric)
train_time += time.time() - train_start
if has_tensorboard and jax.process_index() == 0:
write_train_metric(summary_writer, train_metrics, train_time, cur_step)
epochs.write(
f"Step... ({cur_step} | Loss: {train_metric['loss'].mean()}, Learning Rate: {train_metric['learning_rate'].mean()})"
)
train_metrics = []
if cur_step % training_args.eval_steps == 0 and cur_step > 0:
# ======================== Evaluating ==============================
eval_metrics = []
eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size)
eval_steps = len(eval_dataset) // eval_batch_size
for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False):
# Model forward
batch = next(eval_loader)
metrics = p_eval_step(state.params, batch)
eval_metrics.append(metrics)
# normalize eval metrics
eval_metrics = get_metrics(eval_metrics)
eval_metrics = jax.tree_map(jnp.mean, eval_metrics)
try:
eval_metrics["perplexity"] = math.exp(eval_metrics["loss"])
except OverflowError:
eval_metrics["perplexity"] = float("inf")
# Print metrics and update progress bar
desc = f"Step... ({cur_step} | Eval Loss: {eval_metrics['loss']} | Eval Perplexity: {eval_metrics['perplexity']})"
epochs.write(desc)
epochs.desc = desc
# Save metrics
if has_tensorboard and jax.process_index() == 0:
write_eval_metric(summary_writer, eval_metrics, cur_step)
if cur_step % training_args.save_steps == 0 and cur_step > 0:
# save checkpoint after each epoch and push checkpoint to the hub
if jax.process_index() == 0:
params = jax.device_get(unreplicate(state.params))
model.save_pretrained(
training_args.output_dir,
params=params,
push_to_hub=training_args.push_to_hub,
commit_message=f"Saving weights and logs of step {cur_step}",
)
if __name__ == "__main__":
main()
```
Error logs
```
Traceback (most recent call last):
File "./run_clm_flax.py", line 644, in <module>
main()
File "./run_clm_flax.py", line 422, in main
lm_datasets = tokenized_datasets.map(
File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1657, in map
return self._map_single(
File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2006, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/bmoell/gpt2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "./run_clm_flax.py", line 401, in group_texts
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
File "./run_clm_flax.py", line 401, in <dictcomp>
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
TypeError: can only concatenate list (not "str") to list
``` | 07-13-2021 11:32:55 | 07-13-2021 11:32:55 | I printed the offending file (examples) which is a dict with the following keys
```
dict_keys(['attention_mask', 'input_ids', 'text'])
```
for the data. <|||||>I see! Can you try replacing:
```python
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
# remove_columns=column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
```
by
```python
tokenized_datasets = dataset.map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=dataset.column_names,
load_from_cache_file=not data_args.overwrite_cache,
)
```
?<|||||>It seems to be loading correctly. I will wait to make sure the training starts, but your fix seemed to resolve it. Thank you!<|||||>A new error comes up when training starts:
```
  File "./run_clm_flax.py", line 431, in main
    raise ValueError("--do_train requires a train dataset")
ValueError: --do_train requires a train dataset
``` |
transformers | 12,676 | closed | Wrong model is used in example, should be character instead of subword model | # What does this PR do?
A canine.rst fix.
In the original Google repo for CANINE there was a mixup in the model names in the README.md, which was fixed 2 weeks ago. Since this transformer model was created before that fix, it probably resulted in the wrong model being used in this example in canine.rst.
s = subword, c = character
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
| 07-13-2021 10:40:11 | 07-13-2021 10:40:11 | It's not easy to make a file the styler is happy with, haha. |
transformers | 12,675 | open | [WIP][examples/flax] add gradient accumulation | # What does this PR do?
Adds gradient accumulation in flax language modeling scripts. | 07-13-2021 10:34:29 | 07-13-2021 10:34:29 | Thanks a lot for adding this! That's super useful! It seems to require some bigger changes to core functionality of the script, so I think we should be careful here. Also, I'm starting to wonder whether the examples become too complicated to read with more and more functionality being added, and whether we should maybe create a new training script instead?
Also, wouldn't it be better to use the gradient accumulation functionality from `optax`, such as https://optax.readthedocs.io/en/latest/api.html?highlight=ApplyEvery#optax.apply_every ? I think a lot of the code that is written here already exists in optax classes/functions, no?
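For reference, a rough sketch of what that could look like with `optax.apply_every` (the `k` value and optimizer settings here are only placeholders, not a tested patch for this PR):
```python
import optax

# Accumulate updates over 4 steps, then apply them in a single optimizer step.
# apply_every keeps a running sum of the updates and emits zeros in between.
grad_accum_steps = 4
optimizer = optax.chain(
    optax.adamw(learning_rate=1e-4),
    optax.apply_every(k=grad_accum_steps),
)
```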
@sgugger - I'd love to have your feedback on the PR as well<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>Hi, I've just checked the flax examples on master branch and it seems that gradient accumulation is still missing, so I'm coming back to this PR :)
@patrickvonplaten mentioned to use `optax`, and I've found this (working?) implementation of gradient acc. for T5 MLM pre-training from @gsarti. This could help here :hugs:
|
transformers | 12,674 | closed | Nothing | ## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
```python
model = FlaxGPT2ForMultipleChoice.from_pretrained('gpt2')
```
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 07-13-2021 07:17:37 | 07-13-2021 07:17:37 | |
transformers | 12,673 | closed | Too Many kernels and embeddings were randomly initialized when loading Hugging Face GPT-2 Model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:colab
- Jax version (CPU):0.2.13
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
### Who can help
@patrickvonplaten @patil-suraj
Models:
Used hugging face GPT2 model for multiple choice task.
Examples:
```python
self.gpt2=FlaxGPT2Model(config=self.config, dtype=self.dtype)
```
## Information
Model I am using: GPT2
The problem arises when using:
* loading the model
The tasks I am working on is:
* multiple choice
* dataset: COSMOS
## To reproduce
https://colab.research.google.com/drive/1uTwJ1X1WTxOTDSduKqoUTg3oizehPFJB?usp=sharing
While executing this command:
```python
model = FlaxGPT2ForMultipleChoice.from_pretrained('gpt2')
```
## Expected behavior
To run the following code without any warnings and randomly initialized kernels.
| 07-13-2021 05:17:20 | 07-13-2021 05:17:20 | Hi, this is because you are using a different base model prefix `self.gpt2` to load the base model. To be able to add any heads and still be able to load the base model weights the pre-trained class expects the base model to have the same prefix.
For gpt2 it is `self.transformer`, changing `self.gpt2` to `self.transformer` should fix this.
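Concretely, in the module from the notebook above, the rename would look like this (a sketch mirroring the snippet in the issue; only the attribute name changes):
```python
# before: the prefix "gpt2" does not match GPT-2's base_model_prefix, so the
# base model weights cannot be matched and get randomly re-initialized
# self.gpt2 = FlaxGPT2Model(config=self.config, dtype=self.dtype)

# after: the prefix "transformer" matches the expected base model prefix,
# so the pretrained weights are loaded into it
self.transformer = FlaxGPT2Model(config=self.config, dtype=self.dtype)
```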
Also, try to avoid posting screen-shots, it's usually better for us, if you post the warning/stack-trace as text. Thanks! |
transformers | 12,672 | closed | [doc] fix distil* example link | fix broken links | 07-13-2021 03:38:51 | 07-13-2021 03:38:51 | Great, thanks @songyouwei !
Could you run `make fixup` at the root of your clone to fix the code quality issue? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,671 | closed | Update generation_logits_process.py | # What does this PR do?
If you're using type hints, then passing an `int` where a `float` is annotated is acceptable as per [PEP 484](https://www.python.org/dev/peps/pep-0484/#the-numeric-tower).
This makes life a little nicer.
~Fixes # (issue)~
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-12-2021 23:58:29 | 07-12-2021 23:58:29 | Hi @willfrey, I've merged many of your PRs (thanks for that 🤗) but I don't agree with this one since there is no integer between 0 and 1 so it'll be nice to have the instance check here (in case someone passes a tensor or something).<|||||>I'd suggest checking against numbers.Integral then because that's the ABC/protocol for anything like an integer.
<|||||>Sorry, not numbers.Integral but numbers.Real. <|||||>I'm just trying to not have it raise an exception if I pass in `1` instead of `1.0` because I can't rely on passing `None` to default to `1.0` because it may be overridden by a model's config.
I can change it so that it checks against that and won't yell at you for passing a `1`, if you'd prefer.<|||||>> I'm just trying to not have it raise an exception if I pass in `1` instead of `1.0` because I can't rely on passing `None` to default to `1.0` because it may be overridden by a model's config.
>
> I can change it so that it checks against that and won't yell at you for passing a `1`, if you'd prefer.
Hi @willfrey I checked again and I think the best solution is to add a try-except for the typecasting `float()`. If it throws an exception we should catch it and tell the users they should pass a number.<|||||>Sure, I'll make that change!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@willfrey Hi, could you make the change as we discussed? I'm pinging again since the stale bot is pinging.<|||||>Hi @JetRunner.
Sorry, had this drop off my radar.
Do we want to re-raise an exception from just calling `float(top_p)`? That'll throw a `TypeError` if whatever the original `top_p` parameter does not support being an argument to `float(...)`.
It'd basically be:
```python
try:
    top_p = float(top_p)
except TypeError:
    raise TypeError(f"cannot interpret {top_p!r} as a float")
```
which seems a little redundant.
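For comparison, the `numbers.Real` check mentioned earlier would look roughly like this (just a sketch, not what was merged):
```python
import numbers

# Accept anything number-like (int, float, numpy scalars) and normalize to float.
if not isinstance(top_p, numbers.Real):
    raise TypeError(f"`top_p` has to be a number, but is {top_p!r}")
top_p = float(top_p)
```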
Happy to make the change if you want, though.<|||||>You made a point! I'll just merge it as it is now. |
transformers | 12,670 | closed | Converting fairseq roberta to transformer throws ModuleAttributeError: 'RobertaHubInterface' object has no attribute 'args' | https://github.com/huggingface/transformers/blob/c523b241c2e50c3ed035bb76b938b6a944fed7e5/src/transformers/models/roberta/convert_roberta_original_pytorch_checkpoint_to_pytorch.py#L59
Had this error `ModuleAttributeError: 'RobertaHubInterface' object has no attribute 'args'` when running
```
convert_roberta_original_pytorch_checkpoint_to_pytorch.convert_roberta_checkpoint_to_pytorch(roberta_checkpoint_path='/home/ubuntu/fairseq/checkpoints/',
pytorch_dump_folder_path='./huggingface/',
classification_head=False)
```
`roberta.args.encoder_embed_dim` should now be converted to `roberta.model.encoder.args.encoder_embed_dim` to bypass this issue with the current fairseq version | 07-12-2021 22:48:18 | 07-12-2021 22:48:18 | line 81&82 `roberta_sent_encoder.emb_layer_norm` should be changed to `roberta_sent_encoder.layernorm_embedding` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
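A minimal sketch of the attribute access this refers to (checkpoint path taken from above; the `checkpoint_file` name is an assumption):
```python
from fairseq.models.roberta import RobertaModel

roberta = RobertaModel.from_pretrained("/home/ubuntu/fairseq/checkpoints/", checkpoint_file="model.pt")

# Old access used by the conversion script, fails on recent fairseq:
# embed_dim = roberta.args.encoder_embed_dim

# Access path reported in this issue to work with the current fairseq version:
embed_dim = roberta.model.encoder.args.encoder_embed_dim
print(embed_dim)
```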
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same error just now, thanks for your solution, and I was wondering why attention was not paid to this bug?<|||||>I just met too. Thanks for sharing the bugs here!! |
transformers | 12,669 | closed | [tokenizer.prepare_seq2seq_batch] change deprecation to be easily actionable | Attempt to make an easier to understand and act upon deprecation by giving explicit instructions on what needs to be done:
Fixes: https://github.com/huggingface/transformers/issues/12622
@sgugger | 07-12-2021 22:42:03 | 07-12-2021 22:42:03 | Thanks for iterating on this! |
transformers | 12,668 | closed | Vocab size difference between tokenizer and config for XLMR. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.8.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik maybe?
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): XLM Roberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
>>> from transformers.models.xlm_roberta import XLMRobertaConfig
>>> XLMRobertaConfig().vocab_size
30522
>>> from transformers import AutoTokenizer
>>> AutoTokenizer.from_pretrained('xlm-roberta-base').vocab_size
250002
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect the vocab sizes to be the same. | 07-12-2021 20:35:01 | 07-12-2021 20:35:01 | Hello! If you want the configuration and tokenizer to match the same checkpoint, you should load them from same checkpoint:
```py
>>> from transformers import XLMRobertaConfig
>>> XLMRobertaConfig.from_pretrained('xlm-roberta-base').vocab_size
250002
>>> from transformers import AutoTokenizer
>>> AutoTokenizer.from_pretrained('xlm-roberta-base').vocab_size
250002
```
<|||||>Thanks, @LysandreJik. I guess fundamentally my question isn't just "how do I get the expected vocab size", but also "why is the default size wrong"? The vocab with size 30522 is from BERT; XLM-R has no configuration in which this vocab size is used. Why doesn't the config represent the config used in the paper?<|||||>The issue is that the configuration of this model is a simpler wrapper over RoBERTa since it's basically a copy of that model.
I do agree that this is misleading however, as it puts the wrong defaults. We should make the two configurations independent and provide the correct defaults for XLM-R.
Would you like to open a PR to propose a fix for this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,667 | closed | Adding TF translation example | 07-12-2021 19:37:17 | 07-12-2021 19:37:17 | ||
transformers | 12,666 | closed | translation with identical source and target language, for text normalization | Hi,
This is rather a general question about translation and I am aware that I don't follow exactly your guidelines, so I am sorry for that.
(We could run the examples mentioned in your readme, great tool!)
We try to conceive normalization for Dutch, as a 'translation' task.
So, is it for instance possible to use source + target language, defined as the same language, for instance
--source_lang nl_XX \
--target_lang nl_XX \
{"translation": {"nl_XX": "liefst geen energie vandaag . waar is **m'n** oplaadstation ?", "nl_XX": "liefst geen energie vandaag . waar is **mijn** oplaadstation ?"}}
or
--source_lang source\
--target_lang target \
{"translation": {"source": "liefst geen energie vandaag . waar is **m'n** oplaadstation ?", "target": "liefst geen energie vandaag . waar is **mijn** oplaadstation ?"}}
## Environment info
https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation
--model_name_or_path facebook/mbart-large-50-many-to-many-mmt
Thanks for your answer!
| 07-12-2021 19:35:00 | 07-12-2021 19:35:00 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,665 | closed | word_ids() returned by RoBERTa Tokenizer behaves inconsistently for alphanumeric tokens like '18th' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @LysandreJik (Specifically the RoBERTa / GPT tokenizer @patrickvonplaten)
## Information
Model I am using is RoBERTa.
The problem arises when using:
* [ ] my own modified scripts:
A simple script that uses RoBERTa to do NER.
The tasks I am working on is:
* [ ] my own task or dataset:
I am doing Named Entity Recognition (NER) on the ````conll2003```` dataset from the ````datasets```` library.
As such, I am using RoBERTa + a classification head on top to classify each token in the sequence.
Moreover, when the RoBERTa Tokenizer splits a word into many sub-tokens, I pass the entire sentence through RoBERTa then, using the ````word_ids```` returned by ````Tokenizer.batch_encode_plus````, pass only the contextual embeddings associated with the first sub-token of each word into my final classification head. (otherwise, the ````len(prediction) > len(label)````).
Detailed code of this can be found in the final Section below.
## The Problem
The problem is with the ````word_ids()```` returned by ````batch_encode_plus()```` for sentences that have alphanumeric tokens like ````'18th'```` or ````'1980s'````. Where the ````word_ids()```` will be as follows:
```python
['During', 'Δ the', 'Δ 1980', 's', 'Δ ,', 'Δ life', 'Δ was', 'Δ weird'] # No 'Δ ' before 's', as expected, but
word_ids = [None, 0, 1, 2, 3, 4, 5, 6, 7, None] # This causes a problem ! I expect it to be
word_ids = [None, 0, 1, 2, 2....
['An', 'Δ 18', 'th', 'Δ century', 'Δ poet'] # No 'Δ ' before 'th', as expected, but
word_ids = [None, 0, 1, 2, 3, 4, None, None, None, None] # This causes a problem ! I expect it to be
word_ids = [None, 0, 1, 1....
```
Notice that the token ````'1980s'```` was split into ````['Δ 1980', 's']```` but the ````word_ids```` did NOT indicate this, as what is returned is ````[None, 0, 1, 2, 3, 4, 5, 6, 7, None]````. Which indicates that the sub-token ````'s'```` is its own word (and NOT a sub-token of the word ````'1980s'````)
## To reproduce
Steps to reproduce the behavior:
1. Import and Initialize the RoBERTa Tokenizer (Fast)
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
```
2. ````batch_encode_plus```` sentences that have alphanumeric tokens like ````'18th'```` and ````'1980s'````:
```python
sentences = ["During the 1980s , life was something else", "An 18th century poet"]
e = tokenizer.batch_encode_plus(sentences, return_tensors='pt', padding=True)
```
3. Print and inspect the ````word_ids(i)````
```python
print(tokenizer.tokenize(sentences[0]))
print(e.word_ids(0))
print(tokenizer.tokenize(sentences[1]))
print(e.word_ids(1))
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The ````word_ids```` should correctly indicate whenever tokens such as ````'1980s'```` and ````'18th'```` are split:
```python
['<s>', 'An', 'Δ 18', 'th', 'Δ century', 'Δ poet', '</s>']
[None, 0, 1, 1, 2, 3, None]
```
## Detailed Code
```python
input_sentence = ["He lives joyfully"]
label = ["O", "O", "O"]
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
model = AutoModel.from_pretrained("roberta-base")
encoded_x = tokenizer.batch_encode_plus(input_sentence, return_tensors='pt', padding=True)
# The input sentence now becomes ["<s>", "Δ He", "Δ lives", "Δ joy", "fully", "</s>"]
contextual_embeddings = model(encoded_x.input_ids).last_hidden_state # [1, 6, 768] tensor.
# I need to pass a [1, 3, 768] tensor into my final classification head
# So, I wrote a function that takes as input the word_ids
# and returns a list of the first sub-token of each word (dropping <s> and </s>)
# Function NOT included here for brevity. Same function works perfectly for BERT
my_function( [None, 0, 1, 2, 2, None] ) -> [0, 1, 2]
first_subtoken = torch.LongTensor([0, 1, 2])
embeddings_of_interest = contextual_embeddings[:, first_subtoken, :] # [1, 3, 768] tensor
``` | 07-12-2021 18:55:12 | 07-12-2021 18:55:12 | Thanks for the very helpful reproducer! @n1t0, @SaulLu, could you take a look? Thank you!<|||||>Thank you for providing a code snippets @hos-arafat !
If I understand your request correctly, you would like to retrieve the index of the word to which each token belongs.
If this is your request, you have two ways of doing this - @n1t0 don't hesitate to correct me - :
1. **By letting your tokenizer automatically guess what a word is**
This is the option you use in the example you showed. In this case, the tokenizer uses the tokenizer's pre-tokenization component to define what a word is. On your example, you can see this breakdown by doing:
```python
sentences = ["During the 1980s , life was something else", "An 18th century poet"]
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(sentences[0]))
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(sentences[1])
```
And you will have as output:
```
[('During', (0, 6)),
('Δ the', (6, 10)),
('Δ 1980', (10, 15)),
('s', (15, 16)),
('Δ ,', (16, 18)),
('Δ life', (18, 23)),
('Δ was', (23, 27)),
('Δ something', (27, 37)),
('Δ else', (37, 42))]
```
```
[('An', (0, 2)),
('Δ 18', (2, 5)),
('th', (5, 7)),
('Δ century', (7, 15)),
('Δ poet', (15, 20))]
```
Indeed, there you can see that the ByteLevel pre-tokenization separates the numeric characters from the others.
2. **By specifying before the tokenization the tokens which must belong to the same word**
If ever the separation proposed by the pre-tokenizer does not suit you, you have the possibility of specifying yourself the list of "words" you wish by giving to the tokenizer a list of words instead of a sentence. The only constraint with the tokenizer you use is that you must set the `add_prefix_space` argument to `True`. On your example, if for example you want to consider that words are separated by spaces, you could do:
```python
sentences_splited_into_words = [sentence.split(" ") for sentence in sentences]
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_prefix_space=True)
e = tokenizer.batch_encode_plus(
sentences_splited_into_words, return_tensors="pt", padding=True, is_split_into_words=True
)
print(e.tokens(0))
print(e.word_ids(0))
print(e.tokens(1))
print(e.word_ids(1))
```
Output:
```
['<s>', 'Δ During', 'Δ the', 'Δ 1980', 's', 'Δ ,', 'Δ life', 'Δ was', 'Δ something', 'Δ else', '</s>']
[None, 0, 1, 2, 2, 3, 4, 5, 6, 7, None]
```
```
['<s>', 'Δ An', 'Δ 18', 'th', 'Δ century', 'Δ poet', '</s>', '<pad>', '<pad>', '<pad>', '<pad>']
[None, 0, 1, 1, 2, 3, None, None, None, None, None]
```
I hope this answers your question and if it doesn't, don't hesitate to tell me! :smile: <|||||>Apologies for the late response, had to study and sit for an exam yesterday (aced it!).
Thank you for the quick response, and glad the reproducer was helpful ! @LysandreJik @SaulLu .
That's exactly right @SaulLu , I am interested in retrieving the index of every sub-token and to what "full" word it belongs to. For example:
```python
['An', 'Δ 18', 'th', 'Δ century', 'Δ poet'] # the tokenizer splits '18' and 'th' so len = 5
# This sentence will have labels:
[ 'O', 'O', 'O', 'O'] # len = 4
# Using the word_ids(), I get the index of the first sub-token of each word
# and create the following list:
['An', 'Δ 18', 'Δ century', 'Δ poet'] # I DROP the sub-token 'th' so len = label_len = 4
# When the word_ids() is incorrect (does NOT tell me what tokens were split)
# I end up doing loss(predictions, labels)
# which throws an error cuz len(predictions) > len(labels)
```
Thank you for the solutions you offered ! They are both helpful. I can do two things:
1. Instead of using the ````word_ids()```` I can use the output of / tuples returned by ````pre_tokenize_str()```` in order to figure out what words were split into many sub-tokens and only take the first subtoken
2. Since the ````word_ids()```` are returned correctly when I split the string, I can keep using them and just split my sentences based on whitespaces using ````split()```` and add the argument ````is_split_into_words=True```` to ````batch_encode_plus() ````
I am wondering why ````word_ids()```` is returned incorrectly as I highlighted in the reproducer though. Will try to investigate the ````GPT2Tokenizer```` class and ````tokenize()```` and see if I can spot something and contribute a fix! Would love to give back to this awesome library!
Thanks again for your help!
<|||||>Glad it helped :hugs: and great that your exam went well!
> I am wondering why word_ids() is returned incorrectly as I highlighted in the reproducer though. Will try to investigate the GPT2Tokenizer class and tokenize() and see if I can spot something and contribute a fix! Would love to give back to this awesome library!
That is really nice of you! Personally, I think that the `word_ids` tokenizer method behaves in the desired way. However, I think we could be more specific in [documenting](https://huggingface.co/transformers/main_classes/tokenizer.html?highlight=word_ids#transformers.BatchEncoding.word_ids) the `word_ids` method in the :hugs: transformers library so that it gives as much information as the underlying function used about the role of the pre-tokenizer which is in the :hugs: tokenizers library and is documented [here](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=word_ids#tokenizers.Encoding.word_ids). Would you like to propose a reformulation of the documentation in the transformers library :slightly_smiling_face: ?
In order to make it easier to read my answer, I put a copy of the two documentations below.
- `word_ids` method in the :hugs: transformers:
``` python
def word_ids(self, batch_index: int = 0) -> List[Optional[int]]:
"""
Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
Args:
batch_index (:obj:`int`, `optional`, defaults to 0): The index to access in the batch.
Returns:
:obj:`List[Optional[int]]`: A list indicating the word corresponding to each token. Special tokens added by
the tokenizer are mapped to :obj:`None` and other tokens are mapped to the index of their corresponding
word (several tokens will be mapped to the same word index if they are parts of that word).
"""
```
- `word_ids` method in the :hugs: tokenizers:
``` python
def word_ids(self):
"""
The generated word indices.
They represent the index of the word associated to each token.
When the input is pre-tokenized, they correspond to the ID of the given input label,
otherwise they correspond to the words indices as defined by the
:class:`~tokenizers.pre_tokenizers.PreTokenizer` that was used.
For special tokens and such (any token that was generated from something that was
not part of the input), the output is :obj:`None`
Returns:
A :obj:`List` of :obj:`Optional[int]`: A list of optional word index.
"""
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,664 | closed | Add option to load a pretrained model with mismatched shapes | # What does this PR do?
Sometimes, users want to load a checkpoint for a given task with a new head for the same task but different shapes. For instance, they may want to use a checkpoint that does text classification on 2 labels to initialize a model that does text classification on 5 labels.
This PR enables that by adding a new argument to the `from_pretrained` method of `PreTrainedModel`, `TFPreTrainedModel` and `FlaxPreTrainedModel` named `ignore_mismatched_sizes`. When set to True, this argument will ignore the weights from the checkpoint that do not have the same shape as the ones inside the model and leave the randomly initialized weights. | 07-12-2021 18:53:13 | 07-12-2021 18:53:13 | |
transformers | 12,663 | closed | Fix typo in README_zh-hans.md | 07-12-2021 17:49:48 | 07-12-2021 17:49:48 | ||
transformers | 12,662 | closed | [Flax Generation] Correct inconsistencies PyTorch/Flax | # What does this PR do?
The Flax greedy & beam search generation & Marian model had a couple of issues that are addressed here:
- greedy search now correctly pads **after** the eos token & test against PyTorch is added
- beam search now correctly computes the finished beam scores
- marian correctly makes use of bias
- more beam search tests for marian are added
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-12-2021 17:13:35 | 07-12-2021 17:13:35 | The Spanish Marian model should be fixed as well on TPU (cc @gchhablani )
```python
from transformers import FlaxMarianMTModel, MarianTokenizer
import torch
model_name = "Helsinki-NLP/opus-mt-en-es"
model_fx = FlaxMarianMTModel.from_pretrained(model_name)
tokenizer = MarianTokenizer.from_pretrained(model_name)
input_ids = tokenizer("Living Room, The Sheridan House! Your Minneapolis Home!", return_tensors="np").input_ids
sequences_fx = model_fx.generate(input_ids, max_length=64, num_beams=2).sequences
decoded_fx = tokenizer.batch_decode(sequences_fx, skip_special_tokens=True)
print("Out Fx", decoded_fx)
``` |
transformers | 12,661 | closed | 'TransfoXLLMHeadModelOutput' object has no attribute 'loss' | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.11.0-7620-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
Models:
- Transformer XL
## To reproduce
Just to run the example from the documentation: https://huggingface.co/transformers/model_doc/transformerxl.html#transfoxllmheadmodel
Steps to reproduce the behavior:
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
loss = outputs.loss
```
## Expected behavior
I should get a loss, but an exception is thrown instead:
AttributeError: 'TransfoXLLMHeadModelOutput' object has no attribute 'loss'
| 07-12-2021 17:10:00 | 07-12-2021 17:10:00 | Indeed, there's an issue with the docstring! `TransfoXL` has two losses, here's the correct snippet:
```py
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
losses = outputs.losses
```
Would you like to open a PR to fix the docstring?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,660 | closed | Updates timeline for project evaluation | 07-12-2021 16:39:45 | 07-12-2021 16:39:45 | ||
transformers | 12,659 | closed | Can't load pretrained model when working in virtual environment | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Pytorch
- Python version: 3.7.6
- PyTorch version (GPU?): 1.9.0 no GPU
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Using Bert, specifically BertForSequenceClassification
## To reproduce
Steps to reproduce the behavior:
1. Create a virtual environment `python -m venv <name_of_env>`
2. `pip install transformers`
3. `source /path/to/venv/bin/activate`
4. Try to load the BertForSequenceClassification model
Here is a code snippet:
```python
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
```
Below is the error message I get:
```python
HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
Traceback (most recent call last):
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connection.py", line 506, in _connect_tls_proxy
ssl_context=ssl_context,
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
sock, context, tls_in_tls, server_hostname=server_hostname
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 423, in wrap_socket
session=session
File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 870, in _create
self.do_handshake()
File "/home/aclifton/anaconda3/lib/python3.7/ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/connectionpool.py", line 756, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 505, in get_config_dict
user_agent=user_agent,
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1337, in cached_path
local_files_only=local_files_only,
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/file_utils.py", line 1499, in get_from_cache
r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 104, in head
return request('head', url, **kwargs)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/requests/adapters.py", line 510, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-uncased/resolve/main/config.json (Caused by ProxyError('Cannot connect to proxy.', OSError(0, 'Error')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1086, in from_pretrained
**kwargs,
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 440, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/aclifton/venvs/hf_test/lib/python3.7/site-packages/transformers/configuration_utils.py", line 517, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
```
## Expected behavior
I had expected that the virtual environment would not affect the download or declaring the `model` variable.
If I don't run the virtual environment, the above code works. I believe I have located the models in the `~/.cache/huggingface/transformers` directory so if there is a particular place those should be copied to in the `/path/to/venv/` directory let me know. I tried just copying `~/.cache/huggingface` to `/path/to/venv/` and still get the same error.
I will also mention that I am working behind a proxy, but setting the `proxies` parameter doesn't seem to help either. That being said, I do have the model in `~/.cache/huggingface/transformers` and the proxy does not affect the above code snippet when running without the virtual environment. Thanks in advance for your help!
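For reference, this is roughly what I tried for the proxy (the proxy address below is just a placeholder), plus the cache-only variant:
```python
from transformers import BertForSequenceClassification

# placeholder proxy address, replace with the real one
proxies = {"http": "http://my.proxy.host:3128", "https": "http://my.proxy.host:3128"}
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', proxies=proxies)

# or force loading from the local cache only, without any network call
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', local_files_only=True)
```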
**UPDATE**
I changed from `transformers 4.8.2` to `transformers 4.4.2` and the problem goes away.
| 07-12-2021 16:03:34 | 07-12-2021 16:03:34 | Hello! There shouldn't be any difference between using the system-wide environment vs the virtual environment. We mostly use virtual environments to work on `transformers` and we heavily recommend using one when working with `transformers`.
Are you sure the error comes from the virtual environment and not from another setup issue?<|||||>@LysandreJik If I run:
```python
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')
```
using the system-wide environment, then everything works fine. If I activate the venv and run the exact same code, I get the above error. I'm not sure if there is other information I can provide you that would be useful, but I don't change anything in the setup.
Making the system-wide and venv `transformers` version `4.4.2` resolves the error. Making the venv `transformers` version `4.8.2` reproduces the error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,658 | closed | Autotokenizer error "Already borrowed" when used on thread pool | ## Environment info
- `transformers` version: 4.8.2
- Platform: Databricks
- Python version: 3.7.10
- PyTorch version (GPU?): GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using : camembert-base:
The problem arises when I try to use a tokenizer (from whatever model in my experiments) on multiple thread pools with an AutoTokenizer: the error **RuntimeError: Already borrowed** gets raised. I haven't tested whether the same issue occurs with AutoModel, but I suspect it would. This makes it completely inefficient, as it requires duplicating the tokenizer on each thread (same for the model), and is a real problem for packages like Petastorm / Horovod.
## To reproduce
Below you'll find a simple snippet of code to reproduce the error:
Steps to reproduce the behavior:
```Python
from multiprocessing.dummy import Pool as ThreadPool
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="camembert-base")
def tokenizer_test(text):
print(tokenizer(text))
pool = ThreadPool(10)
data_list = ['this is a test'] * 10
pool.map(tokenizer_test, data_list)
pool.close()
pool.join()
```
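For completeness, the per-thread duplication workaround mentioned above looks roughly like this (just a sketch, it keeps one tokenizer copy per thread):
```Python
import threading
from multiprocessing.dummy import Pool as ThreadPool
from transformers import AutoTokenizer

local = threading.local()

def get_tokenizer():
    # one tokenizer instance per thread avoids the concurrent "Already borrowed" error
    if not hasattr(local, "tokenizer"):
        local.tokenizer = AutoTokenizer.from_pretrained("camembert-base")
    return local.tokenizer

def tokenizer_test(text):
    print(get_tokenizer()(text))

pool = ThreadPool(10)
pool.map(tokenizer_test, ['this is a test'] * 10)
pool.close()
pool.join()
```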
However this works fine if i switch the Autotokenizer with CamembertTokenizer.from_pretrained("camembert-base") for example. | 07-12-2021 15:14:55 | 07-12-2021 15:14:55 | Duplicate of https://github.com/huggingface/tokenizers/issues/537<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,657 | closed | Remove SageMaker documentation | # What does this PR do?
This PR removes the SageMaker documentation from huggingface.co/transformers since there is a new documentation at hf.co/docs/sagemaker.
Not sure if the deprecation "warning" should be displayed or just a comment for us. What do you think? | 07-12-2021 15:07:21 | 07-12-2021 15:07:21 | |
transformers | 12,656 | closed | Pipeline should be agnostic | The pipeline test was PyTorch only when ran on both PT and TF, the slow test was failing. | 07-12-2021 14:52:23 | 07-12-2021 14:52:23 | Nice catch ! |
transformers | 12,655 | closed | **encode_plus() shouldn't run for W2V2CTC | The W2V2CTC shouldn't be used to create the input values for W2V2, so the output of `encode_plus` shouldn't be used as raw input for the model. | 07-12-2021 14:33:56 | 07-12-2021 14:33:56 | Indeed, there was a typo. Thanks! |
transformers | 12,654 | closed | Pickle auto models | # What does this PR do?
The auto-generated classes for the Auto models are not picklable, because they are dynamically generated (so pickle can't trace them properly). This PR changes a little bit the way to create the Auto classes in each of their modeling files like proper classes, then update them to add the right methods. As a result, the auto classes are now picklable.
Fixes #12621 | 07-12-2021 14:21:29 | 07-12-2021 14:21:29 | |
transformers | 12,653 | closed | [WIP] Patch BigBird tokenization test | This patches the BigBird integration test.
The core of the issue is that the `[MASK]` token is an `AddedToken` with `lstrip=True`. It, therefore, gobbles up the spaces on the left without getting a sentence piece underline.
Therefore, when decoding, the internal sentence piece tokenizer is unaware that it should add a space in front of the `[MASK]` token.
However, the original tokenizer does correctly decode with the space, so I believe there's an issue with our implementation.
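For context, the `[MASK]` token is registered roughly like this (a sketch of the relevant bit):
```python
from tokenizers import AddedToken

# lstrip=True makes the added token absorb the whitespace on its left when matched
mask_token = AddedToken("[MASK]", lstrip=True, rstrip=False)
```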
@vasudevgupta7 do you know of the difference between the two implementations? Also cc @n1t0 and @SaulLu
Do not merge this as this isn't the correct fix :) | 07-12-2021 13:58:55 | 07-12-2021 13:58:55 | Hey @LysandreJik,
Even the original tokenizer does not introduce a space before `[MASK]`, so I think the tokenizer is alright & the test is wrong instead.
```
# first download the sentencepiece model: wget https://huggingface.co/google/bigbird-roberta-base/resolve/main/spiece.model
import sentencepiece as spm
s = spm.SentencePieceProcessor(model_file='spiece.model')
s.decode([7434, 9894, 67, 9894, 7434])
```
<|||||>Great, then merging this! Thanks @vasudevgupta7 |
transformers | 12,652 | closed | Fix transfo xl integration test | Skipping test until https://github.com/huggingface/transformers/issues/12651 is resolved | 07-12-2021 13:33:16 | 07-12-2021 13:33:16 | |
transformers | 12,651 | closed | TF TransfoXL doesn't work with the `generate` method | The TF TransfoXL model does not output `logits` but `prediction_scores` which are different due to the `AdaptiveEmbedding`.
The TF version of `generate` requires the `logits` to be output, therefore the model doesn't work with the `generate` method. | 07-12-2021 13:27:34 | 07-12-2021 13:27:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,650 | closed | The extended trainer tests should require torch | The extended trainer tests have no global torch requirements. Some tests have no decorator at all and therefore get run in the TF CI, failing because of a lack of PyTorch.
This adds a requirement for torch for all extended trainer tests. | 07-12-2021 13:21:38 | 07-12-2021 13:21:38 | |
transformers | 12,649 | closed | Skip TestMarian_MT_EN | Skip the test until #12647 is resolved. | 07-12-2021 12:58:53 | 07-12-2021 12:58:53 | |
transformers | 12,648 | open | Inconsistency between the tokenization of `CLIPTokenizer` and `CLIPTokenizerFast` with `openai/clip-vit-base-patch32` | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patil-suraj, I think you worked on CLIP, maybe you could help me by confirming that this behavior is not normal. If it is and no one can deal with it first, I'd be happy to try to fix it.
## Information
Model I am using (Bert, XLNet ...): CLIP
## To reproduce
The easiest way to reproduce is to open [this google colab](https://colab.research.google.com/drive/1JzlYtuG4MdAKl8lPI5PkqGcYbdM3N24x?usp=sharing)
Steps to reproduce the behavior:
1. Import the slow and fast CLIP tokenizers from the transformers library and, optionally, the tokenizer of https://github.com/openai/CLIP
```
from transformers import CLIPTokenizer, CLIPTokenizerFast
tokenizer_slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
```
```
from CLIP import clip as clip_orig
```
2. Tokenize the same text with the 3 tokenizers
```
text = "A photo of a cat"
context_length = 77
```
```
tokens_ids_orig = clip_orig.tokenize(text)
tokens_ids_slow = tokenizer_slow.encode(text, padding="max_length", max_length=context_length, return_tensors='pt')
tokens_ids_fast = tokenizer_fast.encode(text, padding="max_length", max_length=context_length, return_tensors='pt')
```
3. Compare the outputs
```
(tokens_ids_orig == tokens_ids_slow).sum() == context_length
```
Output: `True`
```
(tokens_ids_orig == tokens_ids_fast).sum() == context_length
```
## Expected behavior
I think I would have expected the slow and fast versions to tokenize the text in the same way.
| 07-12-2021 12:34:17 | 07-12-2021 12:34:17 | Great catch!
Indeed, this is not normal. Feel free to give it a try to fix this as I won't be able to assign time for it this week, thanks :) <|||||>Three issues are causing this inconsistency:
- The fast tokenizer was using `ByteLevel` `decoder ` which was not removing the end of word suffix `</w>`. Using `BPEDecoder` fixes this
- CLIP uses `bos` and `eos` tokens, but the current post-processor is `ByteLevel` processor which does not add these, using `TemplateProcessing` instead fixes this.
- Unlike GPT2's BPE tokenizer, CLIP's BPE does not represent space with `Ġ`. It instead replaces `</w>` with a space during decoding. But the `BPE` tokenizer in `tokenizers` always seems to replace space with `Ġ`, which is the only remaining issue.
```python
tokenizer_slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32", from_slow=True)
text = "A photo of a cat"
tokenizer_slow.tokenize(text)
# ['a</w>', 'photo</w>', 'of</w>', 'a</w>', 'cat</w>']
tokenizer_fast.tokenize(text)
# ['a</w>', 'Ġ', 'photo</w>', 'Ġ', 'of</w>', 'Ġ', 'a</w>', 'Ġ', 'cat</w>']
```
Is there any way to disable this behavior @n1t0 @SaulLu ?
<|||||>@patil-suraj Hi, I wonder if this issue is solved? When will this fix become official? Thanks!<|||||>I'm really sorry for the delay. I have investigated a bit and I think that unfortunately the last problem is not limited to the fact that spaces are replaced by `Ġ`.
For example, here is the output on another example:
```python
tokenizer_slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32", from_slow=True)
text = "A\n'll 11p223RFβho!!to? of a cat"
tokenizer_slow.tokenize(text)
# ['a</w>', "'ll</w>", '1</w>', '1</w>', 'p</w>', '2</w>', '2</w>', '3</w>', 'rf</w>', 'Γ’ΔΊΔ¨</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']
tokenizer_fast.tokenize(text)
# ['a</w>', 'Δ ', "'</w>", 'll</w>', 'Δ ', '1', '1</w>', 'p</w>', '2', '2', '3</w>', 'rf</w>', 'Γ’ΔΊΔ¨</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'Δ ', 'of</w>', 'Δ ', 'a</w>', 'Δ ', 'cat</w>']
```
I think that we also need a pre tokenizer that reproduces the split induced in [this line](https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py#L124) thanks to this regex: `r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+"""`. I think we could use [`tokenizers.pre_tokenizers.Split`](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.pre_tokenizers.Split) with [tokenizers.pre_tokenizers.Sequence](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html#tokenizers.pre_tokenizers.Sequence) but for the moment I couldn't make it work.
At this point, the only solution I can propose that comes close (but doesn't match entirely) to the correct behavior is to replace the `tokenizer.pre_tokenizer=pre_tokenizers.ByteLevel(add_prefix_space=False)` line of the `CLIPConverter` class in `convert_slow_tokenizer.py` with:
```python
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
[
pre_tokenizers.pre_tokenizers.WhitespaceSplit(),
pre_tokenizers.ByteLevel(
add_prefix_space=False,
),
]
)
```
This would give on the previous example:
```
tokenizer_slow = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
tokenizer_fast = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32", from_slow=True)
text = "A\n'll 11p223RFβho!!to? of a cat"
tokenizer_slow.tokenize(text)
# ['a</w>', "'ll</w>", '1</w>', '1</w>', 'p</w>', '2</w>', '2</w>', '3</w>', 'rf</w>', 'Γ’ΔΊΔ¨</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']
tokenizer_fast.tokenize(text)
# ['a</w>', "'ll</w>", '1', '1</w>', 'p</w>', '2', '2', '3</w>', 'rf</w>', 'Γ’ΔΊΔ¨</w>', 'ho</w>', '!!</w>', 'to</w>', '?</w>', 'of</w>', 'a</w>', 'cat</w>']
```
<|||||>@SaulLu Thanks for providing this temporal solution. I hope this issue could be fixed soon and merged into the huggingface official release by @patil-suraj @n1t0 <|||||>Thank you for investigating this @SaulLu ! There is one more difference which I'm not sure how to handle in fast tokenizers.
Since CLIP is trained on noisy web alt text, it uses `ftfy` to fix the text which also changes the tokenization.
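Concretely, the cleaning applied before BPE amounts to something like this (a rough sketch of the idea, not the exact implementation):
```python
import ftfy

def basic_clean(text):
    # repair mojibake / broken unicode the way CLIP's original tokenizer does
    text = ftfy.fix_text(text)
    # collapse any run of whitespace into a single space
    return " ".join(text.split()).strip()
```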
@n1t0 Would be nice if you let us know if this is something that can be supported in fast tokenizers.<|||||>I just thought about this issue and I think it would be important to fix it quickly because a user who would use the fast version of this tokenizer could really have bad surprises.
1. in the very short term, it is probably safer to remove the fast version of the tokenizer from the library. Indeed I think that fixing this tokenizer will require a lot of discussions (or even a new release of the Tokenizers library)
2. I tried to work to create a fast tokenizer as faithful as possible to the slow version in [this PR](https://github.com/huggingface/transformers/pull/15067). Nevertheless, I really need to discuss this fix with you. I explain in more detail the points to discuss in the PR. :smile:
<|||||>Hey, is this fixed as of now? |
transformers | 12,647 | open | `TestMarian_MT_EN::test_batch_generation_mt_en` Failing due to randomly generated tokens | The test fails with the following:
```
_________________ TestMarian_MT_EN.test_batch_generation_mt_en _________________
[gw0] linux -- Python 3.6.9 /usr/local/bin/python
self = <tests.test_modeling_tf_marian.TestMarian_MT_EN testMethod=test_batch_generation_mt_en>
@slow
def test_batch_generation_mt_en(self):
> self._assert_generated_batch_equal_expected()
tests/test_modeling_tf_marian.py:390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_tf_marian.py:366: in _assert_generated_batch_equal_expected
self.assertListEqual(self.expected_text, generated_words)
E AssertionError: Lists differ: ['Tou[19 chars] healed a man who was affected by the sad disease of leprosy.'] != ['Tou[19 chars] healed a man who was affected byβkifkaΕΌUnjonik ill.']
E
E First differing element 0:
E 'Touc[17 chars]s healed a man who was affected by the sad disease of leprosy.'
E 'Touc[17 chars]s healed a man who was affected byβkifkaΕΌUnjonik ill.'
E
E - ['Touching gently, Jesus healed a man who was affected by the sad disease of '
E ? ^^^^^^ ^^^ ^^^^^^^^^
E
E + ['Touching gently, Jesus healed a man who was affected byβkifkaΕΌUnjonik ill.']
E ? ^^^^^ ^^^^^^ ^^^^^^ +
E
E - 'leprosy.']
``` | 07-12-2021 12:22:56 | 07-12-2021 12:22:56 | Traced back to this commit: https://github.com/huggingface/transformers/commit/184ef8ecd05ac783827b196e8d15403820efedf9
I suspect there is a difference between the upload TF and PT checkpoints<|||||>It seems there's a single difference in the final logits bias:
```py
import torch
from transformers import MarianMTModel
pt_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-mt-en")
tf_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-mt-en", from_tf=True)
pt, tf = pt_model.state_dict(), tf_model.state_dict()
ptf = {}
for key, value in pt.items():
ptf[key] = [value]
for key, value in tf.items():
if key not in ptf:
print(key, "not in ptf")
else:
ptf[key].append(value)
for key, value in ptf.items():
_pt, _tf = value
difference = torch.max(torch.abs(_pt - _tf)).tolist()
if difference > 0:
print(key, difference)
# final_logits_bias 10.176068305969238
```
Seems systematic, independent of runtime or seed.<|||||>I would say the error comes from the TF checkpoint on the hub, looking forward to your input @patrickvonplaten and @patil-suraj.
I'll deactivate the test in the meantime.<|||||>This is also the case for the `Helsinki-NLP/opus-mt-en-zh` checkpoint:
```py
# final_logits_bias 8.724637031555176
```<|||||>And for the `Helsinki-NLP/opus-mt-en-ROMANCE` checkpoint:
```
final_logits_bias 11.757145881652832
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,646 | closed | Fixed docs |
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-12-2021 12:11:26 | 07-12-2021 12:11:26 | |
transformers | 12,645 | closed | `TFHubertModelTest.test_model_from_pretrained` is failing | The `TFHubertModelTest.test_model_from_pretrained` is failing because the TensorFlow checkpoint isn't available under `facebook/hubert-base-ls960`.
Same with `TFHubertRobustModelTest.test_model_from_pretrained` | 07-12-2021 12:05:43 | 07-12-2021 12:05:43 | Added in [`cbd73bd#huggingface.co`](https://huggingface.co/facebook/hubert-base-ls960/commit/cbd73bd518689ab98a9370c74165f4875ccd97d5) |
transformers | 12,644 | closed | Only test the files impacted by changes in the diff | # What does this PR do?
This PR adds some utilities to only run the tests that are impacted by the diff in a PR, to have the CI run faster, save on CI costs and avoid hanging tests. For now the first stage of deployment only concerns PRs, the jobs run at each push (either on circle CI or GitHub actions) still run all the tests.
To make this work, the new utility `tests_fetcher.py` works in three stages:
1. It analyzes the diff to grab the added/deleted/modified files.
2. It builds an internal map that contains for each module all the other modules that depend on it (recursively). For instance `trainer` depends on `trainer_utils` so that map says `trainer_utils` impacts `trainer`. It is recursive, so since `trainer_utils` depends on `file_utils`, the map says `file_utils` impacts `trainer_utils` and `trainer` (a simplified sketch of this construction is shown right after this list).
3. It maps all the impacted files to their corresponding test files. Note that some files in the library may not have direct test files (for instance a model configuration file has no direct test), but with the impacted files computed above, a model configuration file impacts the corresponding modeling file, so changing a model configuration will run the tests of that model.
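As an illustration of stage 2, the reverse-dependency map can be built along these lines (a simplified sketch, not the actual `tests_fetcher.py` code):
```python
from collections import defaultdict

def build_reverse_dependency_map(direct_deps):
    # direct_deps: {module: set of modules it imports}
    reverse = defaultdict(set)
    for module, deps in direct_deps.items():
        for dep in deps:
            reverse[dep].add(module)
    # propagate transitively: whatever is impacted by an impacted module is impacted too
    changed = True
    while changed:
        changed = False
        for impacted in reverse.values():
            for mod in list(impacted):
                new = reverse.get(mod, set()) - impacted
                if new:
                    impacted |= new
                    changed = True
    return reverse
```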
The result is then run for each of the tests in circle CI. Note that for some tests (like text_examples and test_custom_tokenizer) we just check that there is at least some tests to run (so no trivial diff) but still run all the tests like before. In all the jobs, the output of the `tests_fetcher.py` is saved as an artifact for future debugging. | 07-12-2021 11:39:44 | 07-12-2021 11:39:44 | What happens if one only changes a file like `src/transformers/models/bert/__init__.py`? The only files that are impacted by this is `src/transformers/__init__` -> does this mean no tests are run? What if someone introduces a typo in this file that doesn't allow anymore to import `BertModel`? <|||||>Ah great point, I was supposed to add the direct_deps when the file is an init (so that if you change the bert init, it runs the model and tokenizers tests) but forgot! Will make a PR this afternoon! |
transformers | 12,643 | closed | Adding an argument to exclude some states (pretrained weights) from being loaded. | # 🚀 Feature request
Adding an argument in `from_pretrained` to exclude some states (pretrained weights) from being loaded.
## Motivation
In general, we use the `from_pretrained` method to load pretrained states, from the CDN or local files, into the model. However, when I need to adjust the shape of certain layers (submodules), errors are raised due to mismatched shapes.
For example, in the following snippets, I changed the embedding_size of Electra in order to tie the same embeddings as BERT in the subsequent code, but due to the mismatched shapes, many RuntimeErrors were raised in `module._load_from_state_dict`.
```
from transformers import BertModel, BertConfig, ElectraModel, ElectraConfig
bert_config = BertConfig.from_pretrained('bert-base-uncased')
bert_model = BertModel.from_pretrained('bert-base-uncased')
electra_config = ElectraConfig.from_pretrained(
'google/electra-small-generator',
embedding_size=bert_config.hidden_size
)
electra_model = ElectraModel.from_pretrained('google/electra-small-generator', config=electra_config)
```
```
Exception has occurred: RuntimeError
Error(s) in loading state_dict for ElectraModel:
size mismatch for electra.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 128]) from checkpoint, the shape in current model is torch.Size([30522, 768]).
size mismatch for electra.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([512, 768]).
size mismatch for electra.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([2, 128]) from checkpoint, the shape in current model is torch.Size([2, 768]).
size mismatch for electra.embeddings.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for electra.embeddings.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for electra.embeddings_project.weight: copying a param with shape torch.Size([256, 128]) from checkpoint, the shape in current model is torch.Size([256, 768]).
```
Therefore, I think it would be better to add an argument like `excluded_keys` (as in the following example) to `from_pretrained` to explicitly prevent certain states from being loaded, or an argument that automatically skips states with mismatched shapes. I know there are some workarounds, such as loading all states first and then tying each weight respectively, but that results in long and verbose code.
Example:
```
electra_model = ElectraModel.from_pretrained(
'google/electra-small-generator',
config=electra_config,
excluded_keys = [
"electra.embeddings.word_embeddings.weight",
"electra.embeddings.position_embeddings.weight",
"electra.embeddings.token_type_embeddings.weight",
"electra.embeddings.LayerNorm.weight",
"electra.embeddings.LayerNorm.bias",
"electra.embeddings_project.weight",
"generator_predictions.LayerNorm.weight",
"generator_predictions.LayerNorm.bias",
"generator_predictions.dense.weight",
"generator_predictions.dense.bias",
"generator_lm_head.weight"
]
)
```
## Your contribution
If there is no other concern, and no one is implementing similar features, I would be happy to submit a PR for this.
Any thoughts are welcomed :) | 07-12-2021 10:17:48 | 07-12-2021 10:17:48 | I'm in favor of this. For example, when I wanted to fine-tune DETR, I had to use the following hack to make it work:
```
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
state_dict = model.state_dict()
# Remove class weights
del state_dict["class_labels_classifier.weight"]
del state_dict["class_labels_classifier.bias"]
# define new model with custom class classifier
config = DetrConfig.from_pretrained("facebook/detr-resnet-50", num_labels=10)
model = DetrForObjectDetection(config)
model.load_state_dict(state_dict, strict=False)
```
This is because DETR has a head that has been fine-tuned on COCO, and it has 91 classes. However, when fine-tuning on my custom dataset, let's say it has 10 labels, then the classification head needs to be replaced, which is what I do above.
It would be easier if I could just do (with an API similar to what you propose above):
```
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", num_labels=10, excluded_keys = [
"class_labels_classifier.weight",
"class_labels_classifier.bias",
]
)
```
This way, you can easily replace the head of an already fine-tuned model with your custom classification head.
cc @patil-suraj @LysandreJik @patrickvonplaten @sgugger <|||||>Interesting proposal! I would also be in favor of this to enable @qqaatw's use-case.
For your use-case @NielsRogge, while the proposal would also work, I'd favor something much simpler as it's more common to want to drop a head to load in the same architecture but with a randomly initialized layer. With the proposal here, it implies knowing the weight names and manually specifying them.
It can also be achieved with
```
model = DetrModel.from_pretrained("facebook/detr-resnet-50")
model.save_pretrained("directory")
model = DetrForObjectDetection.from_pretrained("directory")
```
which will randomly initialize all layers that are new in `DetrForObjectDetection`.
For your use-case in particular Niels, I would be in favor of having an API like the following:
```
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", load_head=False)
```
It would imply being aware of the head layers, which would probably be achieved by using a `model.get_classification_head` similar to the existing `model.get_output_embeddings` method.<|||||>We have already discussed the second use case internally, and I think we came to the conclusion @NielsRogge code should work with a warning (like the ones we get when the head is different because we load a checkpoint for a task on another task).
The other use case presented in this issue is also interesting. Do we really need to add a new argument for it? We could treat it the same way: when trying to load the weights and there is a size mismatch, just ignore the weights and put them in the warning raised by the `from_pretrained` method.<|||||>@LysandreJik I tried your first code block, however, `DetrForObjectDetection` has 2 heads (one for class labels, one for bounding boxes), and one typically only wants to randomly initialize the class labels classifier (and further train the bounding box regressor head). However, your code only works if you want to randomly initialize both heads (it prints the following warning):
```
Some weights of the model checkpoint at facebook/detr-resnet-50 were not used when initializing DetrModel: ['bbox_predictor.layers.1.bias', 'bbox_predictor.layers.0.weight', 'bbox_predictor.layers.0.bias', 'class_labels_classifier.bias', 'bbox_predictor.layers.1.weight', 'class_labels_classifier.weight', 'bbox_predictor.layers.2.weight', 'bbox_predictor.layers.2.bias']
- This IS expected if you are initializing DetrModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DetrModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of DetrForObjectDetection were not initialized from the model checkpoint at directory and are newly initialized: ['bbox_predictor.layers.1.bias', 'bbox_predictor.layers.0.weight', 'bbox_predictor.layers.0.bias', 'class_labels_classifier.bias', 'bbox_predictor.layers.1.weight', 'class_labels_classifier.weight', 'bbox_predictor.layers.2.weight', 'bbox_predictor.layers.2.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Of course, I don't think there are many models in the library right now that have multiple heads, but DETR is one such model.
<|||||>After discussing offline with @LysandreJik we will add a `ignore_mismatched_size` flag to `from_pretrained`. When activated, weights that don't have the right size will be ignored, which should cover both the use cases in this issue.
I will work on this today.<|||||>The feature has been implemented in #12664. Thanks @sgugger |
transformers | 12,642 | closed | "token_type_ids" is discarded when using GenerationMixin in βgeneration_utils.pyβ | ## Environment info
- `transformers` version: 4.8.2
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.6.10
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help
@patrickvonplaten @yjernite
## Information
Model I am using OpenAIGPTLMHeadModel:
I try to use the class GenerationMixin in `generation_utils.py` to generate words for my pre-trained openai-gpt model, but I find a model performance degradation.
My generation code segment is like this, and I need to pass "input_ids" and "token_type_ids" to my gpt model:
```python
input_ids = torch.tensor(instance["input_ids"], dtype=torch.long, device=args.device).unsqueeze(0)
token_type_ids = torch.tensor(instance["token_type_ids"], dtype=torch.long, device=args.device).unsqueeze(0)
paras = {"token_type_ids": token_type_ids}
bos, eos, pad, speaker1, speaker2 = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS)
chat_history_ids = model.generate(
    input_ids,
    **paras, max_length=128, min_length=5, num_beams=1,
    pad_token_id=pad, use_cache=True,
    eos_token_id=eos, temperature=0.7,
    bos_token_id=bos,
    top_p=0.9, top_k=30, do_sample=True, repetition_penalty=1.03).cpu()
```
But I find that when forward is called in class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel), token_type_ids is None, although I already passed "token_type_ids" to model.generate() with **paras.
## Expected behavior
It seems the bug is here. This function in generation_utils.py discards my token_type_ids:
```python
def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]:
    return {"input_ids": input_ids}
```
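As a temporary workaround I am considering something like the following sketch (the subclass is hypothetical and simply mirrors what the GPT-2 version of this method already does; if I read generation_utils.py correctly, generate() already keeps token_type_ids extended at each step):
```python
from transformers import OpenAIGPTLMHeadModel

class OpenAIGPTLMHeadModelWithTokenTypes(OpenAIGPTLMHeadModel):
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        # forward token_type_ids to the model instead of dropping them
        inputs = {"input_ids": input_ids}
        if kwargs.get("token_type_ids") is not None:
            inputs["token_type_ids"] = kwargs["token_type_ids"]
        return inputs
```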
| 07-12-2021 09:55:14 | 07-12-2021 09:55:14 | Thanks for your issue @nanzhao! In order make `OpenAIGPTLMHeadModel` work with `token_type_ids` we should add this line to `prepare_inputs_for_generation`:
https://github.com/huggingface/transformers/blob/790f1c9545f4a83b97bf75640be82b2112c3efe7/src/transformers/models/gpt2/modeling_gpt2.py#L884<|||||>Would you like to open a PR and give it a try? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,641 | closed | USE_TORCH while import transformers forever true | ## Environment info
- `transformers` version: 4.8.0
- Platform: linux
- Python version: 3.6
- PyTorch version (GPU?):1.8.1(No)
- Tensorflow version (GPU?):2.0.1(No)
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
### Who can help
@sgugger @Rocketknight1
## Information
Model I am using (Bert, XLNet ...):AutoTokenizer(Bert)
The problem arises when using:
* Importing transformers while Using TensorFlow backend
error arises :
In `transformers/file_utils.py`:
```python
if _torch_available:
    torch_version = version.parse(importlib_metadata.version("torch"))
    _torch_fx_available = (torch_version.major, torch_version.minor) == (
        TORCH_FX_REQUIRED_VERSION.major,
        TORCH_FX_REQUIRED_VERSION.minor,
```
which raises:
```
AttributeError: 'Version' object has no attribute 'major'
```
The tasks I am working on is:
* [ ] trying to use ```train_new_from_iterator``` on top of ```bert-base-cased```
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. https://github.com/huggingface/transformers/pull/9441
2. Row 67: `USE_TORCH` is always True, so the `USE_TF` branch at row 80 can never be reached.
## Expected behavior
Allow `transformers` to be imported when using TF.
| 07-12-2021 08:06:05 | 07-12-2021 08:06:05 | I am unsure of what the bug is and the reproducer you are suggesting. You are not supposed to change the source code of the library to activate `USE_TORCH`, you should set it as an environment variable.<|||||>Please close issue,
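For example, something along these lines should skip the torch backend entirely (just an illustration):
```python
import os

# both variables are read when transformers is imported, so set them before the import
os.environ["USE_TF"] = "1"
os.environ["USE_TORCH"] = "0"

import transformers
```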
Something wrong with my env
ΧΧͺΧΧ¨ΧΧ ΧΧΧ ΧΧ³, 12 ΧΧΧΧΧ 2021 Χ-17:33 ΧΧΧͺ Sylvain Gugger <
***@***.***>:
> I am unsure of what the bug is and the reproducer you are suggesting. You
> are not supposed to change the source code of the library to activate
> USE_TORCH, you should set it as an environment variable.
>
> β
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12641#issuecomment-878330631>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AIMDXVAVVDRGGLYXT6BEBV3TXL4L5ANCNFSM5AGJ4YIQ>
> .
>
|
transformers | 12,640 | closed | fix typo in modeling_t5.py docstring | fixes a small typo | 07-12-2021 07:35:40 | 07-12-2021 07:35:40 | |
transformers | 12,639 | closed | Refactored code to improve performance/employ best practices. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Refactors several segments of code in the ```scripts```,```src```,```tests```,```utils``` and ```setup.py``` and increases performance by a bit, using compression methods and newer practices.<br>
No new functions or methods/models added, therefore no documentation changes were required.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-12-2021 07:21:17 | 07-12-2021 07:21:17 | Hello, and thank you for your contribution! It seems you were using a very outdated fork in order to open your PR (see the +5k -56k diff), as several files end up deleted and a lot of library improvements would be reverted were we to merge this PR.
In case these changes were intentional, let me point you to the following documentation: [**philosophy**](https://huggingface.co/transformers/philosophy.html).
The main idea is that all model/tokenizer code should be independent from other models/tokenizers. Manually editing, removing, or adding to a model or tokenizer should not impact any other whatsoever. This further allows to reduce the number of abstractions and gives access to the near-raw PyTorch/TensorFlow/Flax code of the models.
Finally, we have built [tools](https://github.com/huggingface/transformers/tree/master/utils) so that our maintenance isn't elevated by the high amount of duplicated code and so that our code coverage remains complete.
Thank you for your effort!<|||||>> Hello, and thank you for your contribution! It seems you were using a very outdated fork in order to open your PR (see the +5k -56k diff), as several files end up deleted and a lot of library improvements would be reverted were we to merge this PR.
>
> In case these changes were intentional, let me point you to the following documentation: [**philosophy**](https://huggingface.co/transformers/philosophy.html).
>
> The main idea is that all model/tokenizer code should be independent from other models/tokenizers. Manually editing, removing, or adding to a model or tokenizer should not impact any other whatsoever. This further allows to reduce the number of abstractions and gives access to the near-raw PyTorch/TensorFlow/Flax code of the models.
>
> Finally, we have built [tools](https://github.com/huggingface/transformers/tree/master/utils) so that our maintenance isn't elevated by the high amount of duplicated code and so that our code coverage remains complete.
>
> Thank you for your effort!
Right, got it. I'll close this PR and create another one keeping the changes you suggested in mind. Thanks! |
transformers | 12,638 | closed | [flax]fix jax array type check | # What does this PR do?
Fixes #12584, #12578
On colab the `ModelOutput` class is returning empty tuples for jax arrays. This is because on colab TPU the type of jax array is `jax.interpreters.xla._DeviceArray` and the `is_tensor` function here
https://github.com/huggingface/transformers/blob/2dd9440d0835782e41ae415a68e71fd15051c428/src/transformers/file_utils.py#L1796-L1798
expects `jaxlib.xla_extension.DeviceArray` or `jax.core.Tracer`. If the first argument is an array, the `is_tensor` returns `None` in which case the `ModelOutput` class expects the first argument to be a key-value container which is not the case here. So at the end, everything becomes `None` and the `ModelOutput` returns an empty tuple.
Instead the `jnp.ndarray` type check works for jax array types `jaxlib.xla_extension.DeviceArray`, `jax.interpreters.xla._DeviceArray` and also the `ShardedDeviceArray`
| 07-12-2021 06:55:15 | 07-12-2021 06:55:15 | |
transformers | 12,637 | closed | Slower training speed under DeepSpeed | ## Environment info
- `transformers` version: 4.8.2
- Platform: nvcr.io/nvidia/pytorch:21.02-py3 container on CentOS7
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0 with GPU
- Tensorflow version (GPU?): not used
- Using GPU in script?: YES, Tesla P40
- Using distributed or parallel set-up in script?: YES
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X] the official example scripts:
examples/pytorch/translation/run_translation.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task:
wmt16
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create container using `sudo docker run -d -it --runtime=nvidia --net=host --ipc=host -v /home/user/:/workspace nvcr.io/nvidia/pytorch:21.02-py3 bash` on a Linux server with CentOS7 and Tesla P40 GPUs.
2. Install python dependencies mentioned above.
3. Run run_translation.py with different parameters listed below:
1. DDP with fp16 open
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro
2. DDP without fp16
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro ```
3. DeepSpeed ZeRO2 with fp16 open
deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero2.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro
4. DeepSpeed ZeRO2 without fp16
deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero2.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro
5. DeepSpeed ZeRO3 with fp16 open
deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro
4. Training metrics are listed according to the above experiments order:
1. ***** train metrics *****
epoch = 1.0
train_loss = 1.5905
train_runtime = 0:00:20.29
train_samples = 500
train_samples_per_second = 24.632
train_steps_per_second = 6.158
2. ***** train metrics *****
epoch = 1.0
train_loss = 1.482
train_runtime = 0:00:17.57
train_samples = 500
train_samples_per_second = 28.448
train_steps_per_second = 7.112
3. ***** train metrics *****
epoch = 1.0
train_loss = 1.6752
train_runtime = 0:00:32.45
train_samples = 500
train_samples_per_second = 15.406
train_steps_per_second = 3.851
4. ***** train metrics *****
epoch = 1.0
train_loss = 1.523
train_runtime = 0:00:20.15
train_samples = 500
train_samples_per_second = 24.813
train_steps_per_second = 6.203
5. ***** train metrics *****
epoch = 1.0
train_loss = 1.523
train_runtime = 0:00:20.15
train_samples = 500
train_samples_per_second = 24.813
train_steps_per_second = 6.203
## Expected behavior
According to DeepSpeed's official documentation, its training speedup can be up to 10x, but in my experiments I could not get that much speedup. Since the translation script is the tutorial task mentioned in the Transformers "DeepSpeed Integration" document, my expectation was a faster training speed. Are there any environment limitations in my experiment, or is the speedup simply not guaranteed? Thank you in advance for helping me out.
@stas00 | 07-12-2021 04:45:46 | 07-12-2021 04:45:46 | Deepspeed is a project that has many different at times totally unrelated features.
Therefore when you read in a blog that it made something 10x faster, you need to pay close attention to what was the task, and what was the model size, and how many hundreds of gpus, and optimizer, etc., etc.
The main goal of deepspeed is to enable training huge models which is not possible using bare pytorch. In particular when you can't fit your model onto a single GPU. Which means a lot more overhead. Therefore if you're going to compare a straightforward bare-bones pytorch to any other complex solution that enables scalability the former will almost always be faster or on par.
Then as I started this comment Deepspeed has other tools, like faster optimizers, like 1-bit adam as posted here: https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html, which you haven't been using in your test.
I hope this gave you a little bit of clarity of what to expect when.
We have only the main functionality integrated and lots of features are still pending as you can see here https://github.com/huggingface/transformers/issues/9606 - some of them probably require no integration but need to be tested, we just haven't had the time to work on those yet. And I think that list is far from being complete, since the Deepspeed team adds new features all the time.
If you're interested in a particular feature please first try and see if it already works with transformers/HF Trainer, if not, let's discuss the feasibility of its integration.<|||||>> Deepspeed is a project that has many different at times totally unrelated features.
>
> Therefore when you read in a blog that it made something 10x faster, you need to pay close attention to what was the task, and what was the model size, and how many hundreds of gpus, and optimizer, etc., etc.
>
> The main goal of deepspeed is to enable training huge models which is not possible using bare pytorch. In particular when you can't fit your model onto a single GPU. Which means a lot more overhead. Therefore if you're going to compare a straightforward bare-bones pytorch to any other complex solution that enables scalability the former will almost always be faster or on par.
>
> Then as I started this comment Deepspeed has other tools, like faster optimizers, like 1-bit adam as posted here: https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html, which you haven't been using in your test.
>
> I hope this gave you a little bit of clarity of what to expect when.
>
> We have only the main functionality integrated and lots of features are still pending as you can see here #9606 - some of them probably require no integration but need to be tested, we just haven't had the time to work on those yet. And I think that list is far from being complete, since the Deepspeed team adds new features all the time.
>
> If you're interested in a particular feature please first try and see if it already works with transformers/HF Trainer, if not, let's discuss the feasibility of its integration.
Thank you for your reply, Stas. Things are much clearer after reading your detailed comment. I'll check out more features and give you feedback if I make any progress.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,636 | closed | Error on training XLNet. RuntimeError: CUDA error: device-side assert triggered | I am currently trying to pretrain a model using the XLNet architecture.
script to reproduce:
```
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

from transformers import XLNetTokenizerFast
from tokenizers import Tokenizer
from transformers import XLNetLMHeadModel
from transformers import XLNetConfig
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments  # missing from the original snippet

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")

config = XLNetConfig(vocab_size=16000)
model = XLNetLMHeadModel(config=config)

from datasets import load_dataset

raw_datasets = load_dataset("imdb")
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))

# The original snippet imports the collator but never defines `data_collator`; this is an assumed definition.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./models/custom_pasona",
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=1,
    save_steps=1000,
    save_total_limit=2,
    prediction_loss_only=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=small_train_dataset,
)

trainer.train()
```
This results in the following error:
```
ING=1 python pretrain.py
Reusing dataset imdb (~/.cache/huggingface/datasets/imdb/plain_text/1.0.0/e3c66f1788a67a89c7058d97ff62b6c30531e05b549de56d3ab91891f0561f9a)
DatasetDict({
train: Dataset({
features: ['label', 'text'],
num_rows: 25000
})
test: Dataset({
features: ['label', 'text'],
num_rows: 25000
})
unsupervised: Dataset({
features: ['label', 'text'],
num_rows: 50000
})
})
0%| | 0/25 [00:00<?, ?ba/s]Asking to pad to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no padding.
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 25/25 [00:17<00:00, 1.43ba/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 25/25 [00:17<00:00, 1.46ba/s]
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50/50 [00:35<00:00, 1.42ba/s]
The following columns in the training set don't have a corresponding argument in `XLNetLMHeadModel.forward` and have been ignored: text.
***** Running training *****
Num examples = 1000
Num Epochs = 1
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 1
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [107,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [108,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [109,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [110,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [111,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [112,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [113,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [114,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [115,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [116,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [117,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [118,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [119,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [120,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [121,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [122,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [123,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [71,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [72,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [73,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [74,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [75,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [76,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [77,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [78,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [79,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [80,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [81,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [82,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [83,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [84,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [85,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [86,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [87,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [88,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "~/Desktop/workspace/recommendation/pretrain.py", line 74, in <module>
trainer.train()
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1269, in train
tr_loss += self.training_step(model, inputs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1762, in training_step
loss = self.compute_loss(model, inputs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1794, in compute_loss
outputs = model(**inputs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1432, in forward
transformer_outputs = self.transformer(
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1189, in forward
output_h = self.dropout(word_emb_k)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/functional.py", line 983, in dropout
else _VF.dropout(input, p, training))
RuntimeError: CUDA error: device-side assert triggered
```
I could not find out what is the source of the problem.
Why is this happening?
Is there any walkthrough on training with XLNet? | 07-12-2021 04:30:02 | 07-12-2021 04:30:02 | I found out that changing vocab_size to 32000 fixes this error.
How do I change this number to other than 32000?
I made a custom tokenizer
```
tokenizer.train(files=paths, vocab_size=16000, special_tokens=special_tokens)
tokenizer.save('unigram.json', pretty=True)
```
loaded it
```
tokenizer = Tokenizer.from_file('unigram.json')
tokenizer = XLNetTokenizerFast(tokenizer_object=tokenizer)
```
Using this with vocab_size 16000 causes an error.
How do I load this custom tokenizer with XLNet?<|||||>It seems to me that the issue here is that you're loading a specific tokenizer, `xlnet-base-cased`, with a dictionary of 32k tokens:
```py
>>> from transformers import XLNetTokenizerFast
>>> tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")
>>> len(tokenizer)
32000
```
But you're then using a randomly initialized model that you initialized at 16k tokens. So if your model receives a token ID greater than 15999, it will crash with the error above.
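For anyone hitting the same assert, a minimal sketch of keeping the randomly initialized model consistent with the custom 16k tokenizer could look like the following (it reuses the `unigram.json` file from the comment above and derives `vocab_size` from the tokenizer instead of hard-coding it):
```python
from tokenizers import Tokenizer
from transformers import XLNetConfig, XLNetLMHeadModel, XLNetTokenizerFast

raw_tokenizer = Tokenizer.from_file("unigram.json")        # the custom 16k tokenizer trained above
tokenizer = XLNetTokenizerFast(tokenizer_object=raw_tokenizer)

config = XLNetConfig(vocab_size=len(tokenizer))             # len() includes any added/special tokens
model = XLNetLMHeadModel(config=config)
model.resize_token_embeddings(len(tokenizer))               # keeps the embedding matrix in sync
```
With the model and tokenizer vocabularies matching, the embedding lookup no longer receives out-of-range IDs. |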
transformers | 12,635 | open | Long-Short Transformer | # 🌟 New model addition
## Model description
https://arxiv.org/abs/2107.02192
In this paper, they propose Long-Short Transformer, an efficient self-attention mechanism for modeling long sequences with linear complexity for both language and vision tasks. It aggregates a novel long-range attention with dynamic projection to model distant correlations and a short-term attention to capture fine-grained local correlations. Transformer-LS can be applied to both autoregressive and bidirectional models without additional complexity.
## Open source status
* [x] the model implementation is available: https://github.com/NVIDIA/transformer-ls
* [x] the model weights are available: https://github.com/NVIDIA/transformer-ls
* [x] who are the authors: Chen Zhu (@zhuchen03) and Wei Ping and Chaowei Xiao and Mohammad Shoeybi and Tom Goldstein and Anima Anandkumar and Bryan Catanzaro (NVIDIA, University of Maryland) | 07-12-2021 01:47:30 | 07-12-2021 01:47:30 | A PyTorch implementation: https://github.com/lucidrains/long-short-transformer<|||||>Cool work! However, models have a low chance of being added if there are no pre-trained weights available.<|||||>Thanks for your interest in our work! We have released the code for ImageNet and LRA at [https://github.com/NVIDIA/transformer-ls](https://github.com/NVIDIA/transformer-ls). Pretrained weights for ImageNet are also available. We will release the character-level LM soon. <|||||>Hi @zhuchen03! - Since I would like to add your model to the HuggingFace library, I am wondering if the pretrained weights are also available for the character-level LM?<|||||>Hi @NielsRogge @zhuchen03 - I would like to implement these models. I will start with the ImageNet classification one. |
transformers | 12,634 | closed | Add ByT5 option to example run_t5_mlm_flax.py | Small change adding ByT5 option to the Flax T5 training example
When model_type is `byt5`, use ByT5Tokenizer in place of T5TokenizerFast
Example: https://colab.research.google.com/drive/1WcDRPYyvuMZDbWuhsS3hTaVyXxjqryPz?usp=sharing | 07-11-2021 21:46:55 | 07-11-2021 21:46:55 | I was trying this on TPU, but it ended with the following error message:
```bash
Traceback (most recent call last):
File "run_t5_mlm_flax.py", line 728, in <module>
model_inputs = data_collator(samples)
File "run_t5_mlm_flax.py", line 276, in __call__
batch["decoder_input_ids"] = shift_tokens_right(
File "/home/stefan/transformers/src/transformers/models/t5/modeling_flax_t5.py", line 55, in shift_tokens_right
shifted_input_ids = jax.ops.index_update(shifted_input_ids, (..., 0), decoder_start_token_id)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py", line 352, in index_update
return _scatter_update(
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py", line 64, in _scatter_update
y = jnp.asarray(y)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3082, in asarray
return array(a, dtype=dtype, copy=False, order=order)
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3042, in array
lax._check_user_dtype_supported(_inferred_dtype, "array")
File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/lax/lax.py", line 6963, in _check_user_dtype_supported
raise TypeError(msg)
TypeError: JAX only supports number and bool dtypes, got dtype object in array
```
I was using this config:
```
https://huggingface.co/google/byt5-base/raw/main/config.json
```
with the following parameters:
```bash
python run_t5_mlm_flax.py --output_dir="${MODEL_DIR}" --model_type="t5" --config_name="${MODEL_DIR}" --tokenizer_name="google/byt5-base" --max_seq_length="512" --per_device_train_batch_size="16" --per_device_eval_batch_size="16" --learning_rate="1e-3" --weight_decay="0.001" --warmup_steps="5000" --overwrite_output_dir --num_train_epochs="10" --logging_steps="500" --save_steps="2500" --eval_steps="2500" --train_file /mnt/datasets/train.txt --validation_file /mnt/datasets/validation.txt
```
@patrickvonplaten do you have any hint how to fix this :thinking: <|||||>> I was trying this on TPU, but it ended with the following error message:
>
> ```shell
> Traceback (most recent call last):
> File "run_t5_mlm_flax.py", line 728, in <module>
> model_inputs = data_collator(samples)
> File "run_t5_mlm_flax.py", line 276, in __call__
> batch["decoder_input_ids"] = shift_tokens_right(
> File "/home/stefan/transformers/src/transformers/models/t5/modeling_flax_t5.py", line 55, in shift_tokens_right
> shifted_input_ids = jax.ops.index_update(shifted_input_ids, (..., 0), decoder_start_token_id)
> File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py", line 352, in index_update
> return _scatter_update(
> File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py", line 64, in _scatter_update
> y = jnp.asarray(y)
> File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3082, in asarray
> return array(a, dtype=dtype, copy=False, order=order)
> File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3042, in array
> lax._check_user_dtype_supported(_inferred_dtype, "array")
> File "/home/stefan/dev/lib/python3.8/site-packages/jax/_src/lax/lax.py", line 6963, in _check_user_dtype_supported
> raise TypeError(msg)
> TypeError: JAX only supports number and bool dtypes, got dtype object in array
> ```
>
> I was using this config:
>
> ```
> https://huggingface.co/google/byt5-base/raw/main/config.json
> ```
>
> with the following parameters:
>
> ```shell
> python run_t5_mlm_flax.py --output_dir="${MODEL_DIR}" --model_type="t5" --config_name="${MODEL_DIR}" --tokenizer_name="google/byt5-base" --max_seq_length="512" --per_device_train_batch_size="16" --per_device_eval_batch_size="16" --learning_rate="1e-3" --weight_decay="0.001" --warmup_steps="5000" --overwrite_output_dir --num_train_epochs="10" --logging_steps="500" --save_steps="2500" --eval_steps="2500" --train_file /mnt/datasets/train.txt --validation_file /mnt/datasets/validation.txt
> ```
>
> @patrickvonplaten do you have any hint how to fix this
Let me try!<|||||>@stefan-it I cannot reproduce the error. Can you try running the following:
```bash
./run_t5_mlm_flax.py \
--output_dir="${MODEL_DIR}" \
--model_type="t5" \
--config_name="${MODEL_DIR}" \
--tokenizer_name="google/byt5-base" \
--max_seq_length="128" \
--per_device_train_batch_size="1" \
--per_device_eval_batch_size="1" \
--learning_rate="1e-3" \
--weight_decay="0.001" \
--warmup_steps="5000" \
--overwrite_output_dir \
--num_train_epochs="10" \
--logging_steps="500" \
--save_steps="2500" \
--eval_steps="2500" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_als"
```
using
`https://huggingface.co/google/byt5-base/raw/main/config.json` as your config?
This uses a very small oscar dataset just to check that the script is correct. As you can see the script should run just fine.<|||||>Hi @patrickvonplaten , your command is working - even with a sequence length of 512 and a batch size of 16. I'll check my dataset now π
Maybe some lines are too short...<|||||>That's really interesting! I just filtered out lines that contain fewer than five tokens and training is working. Thanks for your help :hugs:
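For reference, a filter along those lines is a one-liner with `datasets` (sketch only, using the small OSCAR config from the command above and a simple whitespace-token heuristic):
```python
from datasets import load_dataset

raw_datasets = load_dataset("oscar", "unshuffled_deduplicated_als")
raw_datasets = raw_datasets.filter(lambda example: len(example["text"].split()) >= 5)
```
Dropping near-empty lines before tokenization was apparently enough to avoid the dtype error above. |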
transformers | 12,633 | closed | Error pushing GPT2 flax training model to hub | While training a GPT-2 model using the following script, the model crashes while pushing to the hub. I made the saving step 10 since I suspected the crash was related to saving.
```
#!/usr/bin/env bash
python3 swedish-gpt2-oscar/run_stream_trainer.py \
--output_dir="${MODEL_DIR}" \
--model_type="gpt2" \
--config_name="${MODEL_DIR}" \
--tokenizer_name="${MODEL_DIR}" \
--dataset_name="mc4" \
--dataset_config_name="sv" \
--do_train --do_eval \
--block_size="512" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="64" \
--learning_rate="5e-3" --warmup_steps="1000" \
--adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \
--overwrite_output_dir \
--max_steps="100000" \
--decay_steps="100000" \
--logging_steps="500" \
--save_steps="10" \
--eval_steps="2500" \
--push_to_hub
```
The files that would be committed:
https://huggingface.co/birgermoell/ckpt-10/commit/d256a3e1fc7dd9da4833c98a21ea689d3caede18
Stacktrace
```
Model weights saved in /home/bmoell/swedish-gpt2-oscar/ckpt-10/flax_model.msgpack
07/11/2021 20:56:44 - INFO - huggingface_hub.repository - Uploading LFS objects: 100% (1/1), 498 MB | 33 MB/s, done.
Model pushed to the hub in this commit: https://huggingface.co/birgermoell/ckpt-10/commit/d256a3e1fc7dd9da4833c98a21ea689d3caede18
07/11/2021 20:56:45 - INFO - __main__ - checkpoint saved
07/11/2021 20:56:45 - INFO - absl - Saving checkpoint at step: 10
tcmalloc: large alloc 1373577216 bytes == 0x241732000 @ 0x7ff956432680 0x7ff956452bdd 0x7ff67595f20d 0x7ff67596d340 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff675968bd3 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf
tcmalloc: large alloc 2986590208 bytes == 0x293524000 @ 0x7ff956432680 0x7ff956452bdd 0x7ff67595f20d 0x7ff67596d340 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff675968bd3 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf 0x568d9a 0x68cdc7 0x67e161
tcmalloc: large alloc 1493295104 bytes == 0x1eff42000 @ 0x7ff956432680 0x7ff956453824 0x5f7b11 0x7ff675968c6f 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf 0x568d9a 0x68cdc7 0x67e161 0x67e1df 0x67e281 0x67e627 0x6b6e62 0x6b71ed 0x7ff9562490b3 0x5f96de
07/11/2021 20:56:53 - INFO - absl - Saved checkpoint at swedish-gpt2-oscar/checkpoint_10
Traceback (most recent call last):
File "swedish-gpt2-oscar/run_stream_trainer.py", line 818, in <module>
main()
File "swedish-gpt2-oscar/run_stream_trainer.py", line 805, in main
save_checkpoint(training_args.output_dir, jax_utils.unreplicate(state), cur_step, keep=training_args.save_total_limit, overwrite=False)
File "/home/bmoell/gpt2/lib/python3.8/site-packages/flax/training/checkpoints.py", line 139, in save_checkpoint
if len(checkpoint_files) > keep:
TypeError: '>' not supported between instances of 'int' and 'NoneType'
https://symbolize.stripped_domain/r/?trace=7ff9562123f4,7ff95626820f,7f&map=
*** SIGTERM received by PID 61911 (TID 61911) on cpu 29 from PID 60845; stack trace: ***
PC: @ 0x7ff9562123f4 (unknown) do_futex_wait.constprop.0
@ 0x7ff94d15e800 976 (unknown)
@ 0x7ff956268210 348884112 (unknown)
@ 0x80 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=7ff9562123f4,7ff94d15e7ff,7ff95626820f,7f&map=2a762cd764e70bc90ae4c7f9747c08d7:7ff94021c000-7ff94d49d280
E0711 20:56:53.386296 61911 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM.
E0711 20:56:54.372857 61911 process_state.cc:771] RAW: Raising signal 15 with default behavior
0%| | 11/100000 [01:37<245:42:13, 8.85s/it]
``` | 07-11-2021 21:03:52 | 07-11-2021 21:03:52 | The training script `run_stream_trainer.py` is not an official training script no? Where can I find `run_stream_trainer.py` ? The error also does not seem to be related to pushing to the hub but rather with the line `keep=training_args.save_total_limit`.<|||||>I noticed some strange git behaviour including the creation of another repo where the files were uploaded.
https://huggingface.co/birgermoell/ckpt-10/tree/main
This is likely related to git and not related to transformers so I'm closing the issue and I'm hoping to resolve it.
Thank you so much for the help debugging.
<|||||>> The training script `run_stream_trainer.py` is not an official training script no? Where can I find `run_stream_trainer.py` ? The error also does not seem to be related to pushing to the hub but rather with the line `keep=training_args.save_total_limit`.
It's true that it's not an official script. Uploading the script manually now.
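For the `TypeError` in the traceback itself, a small guard around the failing `save_checkpoint` call is one possible fix (sketch only; `training_args`, `state` and `cur_step` are the objects from the script above, and `--save_total_limit` defaults to `None` when not passed):
```python
from flax import jax_utils
from flax.training import checkpoints

keep = training_args.save_total_limit or 1  # flax's save_checkpoint expects an integer for `keep`
checkpoints.save_checkpoint(training_args.output_dir, jax_utils.unreplicate(state), cur_step, keep=keep)
```
Passing an explicit integer avoids the `int` vs `NoneType` comparison that raised above. |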
transformers | 12,632 | closed | Vocab Size does not change when adding new tokens | Env:
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102
When adding new tokens to an existing tokenizer, the tokenizer's vocab size variable doesn't change. I believe it should be updated every time the tokens change.
Here is a google colab to reproduce: https://colab.research.google.com/drive/1mC_eSmHOgA_F5fPX7AsUt86jAbC7iSSw?usp=sharing
Specifics:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
current_size = tokenizer.vocab_size
tokenizer.add_tokens(["new_token"])
tokenizer.vocab_size, current_size, len(tokenizer.vocab)
```
Outputs: (50257, 50257, 50258)
The same happens when I do the following as well
`tokenizer = AutoTokenizer.from_pretrained("gpt2", additional_special_tokens=["new_token"])` | 07-11-2021 19:08:16 | 07-11-2021 19:08:16 | I think you should use `print(len(tokenizer))` instead of `print(tokenizer.vocab_size)` (as the `vocab_size` is a fixed attribute, referring to the base vocabulary without any additional tokens). Refer to [this](https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512) and [this](https://github.com/huggingface/transformers/blob/2dd9440d0835782e41ae415a68e71fd15051c428/src/transformers/tokenization_utils.py#L161).<|||||>ah okay, didn't realize this was expected behavior. Thanks! |
transformers | 12,631 | closed | TypeError: forward() got an unexpected keyword argument 'label' in main tutorial | I am following the instructions provided in https://huggingface.co/transformers/training.html and trying to use the PyTorch API for fine-tuning. Here is the error I am getting:
```
from datasets import load_dataset
from transformers import AutoTokenizer
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification
from transformers import get_scheduler
from transformers import AdamW
import torch
from tqdm.auto import tqdm
raw_datasets = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(["text"])
tokenized_datasets.set_format("torch")
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
"linear",
optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_training_steps
)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)
        loss = outputs.loss
        loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
```
Error
```
TypeError Traceback (most recent call last)
<ipython-input-74-79930d537f14> in <module>()
54 for batch in train_dataloader:
55 batch = {k: v.to(device) for k, v in batch.items()}
---> 56 outputs = model(**batch)
57 loss = outputs.loss
58 loss.backward()
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: forward() got an unexpected keyword argument 'label'
``` | 07-11-2021 11:31:03 | 07-11-2021 11:31:03 | You missed a line in the tutorial:
```python
tokenized_datasets = tokenized_datasets.remove_columns(["text"]) # this you have
tokenized_datasets = tokenized_datasets.rename_column("label", "labels") # MISSED
tokenized_datasets.set_format("torch") # this you have
```
The model expects a column called `labels`, not `label`, which is why it complains.<|||||>I had to restart everything from scratch and it worked. Before that, I tried renaming the label to labels but got this error:
```
KeyError Traceback (most recent call last)
<ipython-input-5-c7230e7411f6> in <module>()
27 tokenized_datasets.set_format("torch") # this you have
28
---> 29 train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
30 eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
31 train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=8)
6 frames
/usr/local/lib/python3.7/dist-packages/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
``` |
transformers | 12,630 | closed | [Examples][Flax] added test file in summarization example | # What does this PR do?
Fixes #12527
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger, @patil-suraj
| 07-11-2021 05:30:58 | 07-11-2021 05:30:58 | |
transformers | 12,629 | closed | How much of an improvement is DistilGPT-2 over an equivalent model trained without distillation? | Hi!
I'm working on a distillation project right now, and I was wondering if this information is available anywhere.
I saw [this page](https://github.com/huggingface/transformers/tree/9ee66adadb2a8d6e04e8b18a1c9ea0b57c80642e/examples/research_projects/distillation) provides a comparison for `DistilGPT-2` vs `GPT-2`, but I don't see anything about the improvement of `DistilGPT-2` over an equivalent model (same parameters, etc.) trained in a traditional fashion.
Any help would be greatly appreciated. Thanks! | 07-10-2021 21:17:03 | 07-10-2021 21:17:03 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,628 | open | GPTNeo Error Attempting to Generate Text | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
- Using TPU: Yes
### Who can help
@patil-suraj and @patrickvonplaten
Models:
GPTNeo
Library:
- flax transformers
## Information
Model I am using (Bert, XLNet ...): GPTNeo
## To reproduce
Steps to reproduce the behavior:
Here is a Google Colab for reproducing: https://colab.research.google.com/drive/1tba52h5t-BP3g13FMdPXVjKqpoLTlGvP?usp=sharing
For convenience here is the error msg:
```
TypeError: dynamic_update_slice update shape must be smaller than operand shape, got update shape (1, 45) for operand shape (1, 20).
```
I was originally getting the same error as #12081. However, when I attempted to implement the same fix as in that issue, I got the above error. The error might be because I am using the "ForCausalLM" version of GPTNeo. However, there is no LMHead version
## Expected behavior
Generate the output sequence
| 07-10-2021 18:50:08 | 07-10-2021 18:50:08 | Hey @ncoop57,
The reason for this error is that `input_ids.shape[1]` (the length of the input) is larger than `max_length`. By default `max_length` of generate is 20 and in your case `input_ids.shape[1]` is > 20, which will error out. `max_length` defines the total number of output tokens (not just the number of generated tokens). So if the number of input tokens (`input_ids.shape[1]`) is already > `max_length`, the model is told to not generate anything and will error out (we should put better error messages here - which is why I'm leaving this issue open).
In short to solve your problem, simply pass a higher `max_length` parameter:
```
output_seq = model.generate(input_ids=inputs.input_ids, max_length=100)
```<|||||>@patrickvonplaten we should probably raise an error if `cur_length` is greater than `max_length`, otherwise it's hard to figure out what went wrong.
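A hypothetical version of such a check (illustrative sketch only, not the actual library code):
```python
def check_generate_lengths(input_ids, max_length):
    """Raise early instead of failing inside dynamic_update_slice."""
    cur_length = input_ids.shape[-1]
    if cur_length >= max_length:
        raise ValueError(
            f"`max_length` ({max_length}) must be larger than the number of input tokens "
            f"({cur_length}); pass a larger `max_length` to `generate`."
        )
```
Something along these lines would surface the problem much earlier than the shape error above. |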
transformers | 12,627 | open | Add Flax Models to Pipelines | # 🚀 Feature request
Hi y'all, I am trying a GPTNeo Flax model and want to use it in the text generation pipeline. However, it is currently not supported. From looking at the current implementation of Flax models and the text generation pipeline, it should be a relatively easy (famous last words) addition.
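Until that lands, a minimal stop-gap is to call the Flax model's `generate` directly instead of going through a pipeline (sketch only; the checkpoint and generation settings below are just examples):
```python
from transformers import AutoTokenizer, FlaxGPTNeoForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

inputs = tokenizer("Hello, my name is", return_tensors="np")
outputs = model.generate(inputs.input_ids, max_length=50)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
```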
## Motivation
HF is heavily integrating Flax models (which I think is awesome!) into the library and has mirrored many of the already existing parts of the transformers library for Flax models, similar to what was done for TF models. Adding support for Flax models in the pipeline API will help those who are working with pure Flax models, especially for applications that will use the model to accomplish some task.
## Your contribution
I would be willing to open a PR if one is not currently underway (I looked for one and didn't find any). However, I am new to Flax, so if the task is more difficult than I expect I probably will not be able to complete it.
| 07-10-2021 18:26:54 | 07-10-2021 18:26:54 |