url (stringlengths 62-66) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64, 377M-2.15B) | node_id (stringlengths 18-32) | number (int64, 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, ⌀) | author_association (stringclasses, 4 values) | active_lock_reason (stringclasses, 2 values) | body (stringlengths 0-234k, ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses, 3 values) | draft (bool, 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/9930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9930/comments | https://api.github.com/repos/huggingface/transformers/issues/9930/events | https://github.com/huggingface/transformers/issues/9930 | 798,522,543 | MDU6SXNzdWU3OTg1MjI1NDM= | 9,930 | Hyperparameter search w/ RayTune BrokenPipeError: [Errno 32] Broken pipe | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @marcoabrate it looks like you're seeing the same error as here https://github.com/huggingface/transformers/issues/9146. This should be fixed on transformers master and the latest Ray nightly wheels. Can you try with those and see if that fixes this? You can install the latest Ray nightly wheels by following the instructions here: https://docs.ray.io/en/master/installation.html#daily-releases-nightlies.",
"Hi @amogkam, thank you for your quick reply.\r\nI am now on HF transformers master and I am installing raytune for Python 3.6 with\r\n\r\n`pip install -U \"https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp36-cp36m-manylinux2014_x86_64.whl\"`\r\n\r\nHowever, on Google Colab I still get the same error:\r\n\r\n```\r\n[INFO|trainer.py:358] 2021-02-02 11:04:25,269 >> Using amp fp16 backend\r\n02/02/2021 11:04:25 - INFO - __main__ - *** Hyperparameters Search ***\r\n02/02/2021 11:04:25 - INFO - ray.tune.ray_trial_executor - Initializing Ray automatically.For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run`.\r\n2021-02-02 11:04:26,687\tINFO services.py:1182 -- View the Ray dashboard at http://127.0.0.1:8265\r\ntcmalloc: large alloc 1236656128 bytes == 0x7fc271b46000 @ 0x7fc54e359615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7fc4a06ddc7c 0x7fc4a06e4bfa 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4afc 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e5d13 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e\r\ntcmalloc: large alloc 1545822208 bytes == 0x7fc215910000 @ 0x7fc54e359615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7fc4a06ddc7c 0x7fc4a06e4bfa 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4afc 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e4fe2 0x7fc4a06e634f 0x7fc4a06e3a39 0x7fc4a06e5d13 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d5e 0x7fc4a06e643c 0x7fc4a06e3a39 0x7fc4a06e4f45 0x7fc4a06e67fa 0x7fc4a06e3a39 0x7fc4a06e5d13\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/connection.py\", line 706, in send_packed_command\r\n sendall(self._sock, item)\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/_compat.py\", line 9, in sendall\r\n return sock.sendall(*args, **kwargs)\r\nConnectionResetError: [Errno 104] Connection reset by peer\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/content/drive/My Drive/MAGMA: Summarization/transformers_last/transformers/examples/seq2seq/finetune_trainer.py\", line 432, in <module>\r\n main()\r\n File \"/content/drive/My Drive/MAGMA: Summarization/transformers_last/transformers/examples/seq2seq/finetune_trainer.py\", line 346, in main\r\n resources_per_trial = {'gpu': 1})\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1188, in hyperparameter_search\r\n best_run = run_hp_search(self, n_trials, direction, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/integrations.py\", line 220, in run_hp_search_ray\r\n analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/tune.py\", line 338, in run\r\n restore=restore)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py\", line 149, in __init__\r\n self._run_identifier = Experiment.register_if_needed(run)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py\", line 294, in register_if_needed\r\n register_trainable(name, run_object)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py\", line 71, in register_trainable\r\n _global_registry.register(TRAINABLE_CLASS, name, 
trainable)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py\", line 124, in register\r\n self.flush_values()\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py\", line 146, in flush_values\r\n _internal_kv_put(_make_key(category, key), value, overwrite=True)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/_private/client_mode_hook.py\", line 47, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/ray/experimental/internal_kv.py\", line 35, in _internal_kv_put\r\n updated = worker.redis_client.hset(key, \"value\", value)\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/client.py\", line 3050, in hset\r\n return self.execute_command('HSET', name, *items)\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/client.py\", line 900, in execute_command\r\n conn.send_command(*args)\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/connection.py\", line 726, in send_command\r\n check_health=kwargs.get('check_health', True))\r\n File \"/usr/local/lib/python3.6/dist-packages/redis/connection.py\", line 718, in send_packed_command\r\n (errno, errmsg))\r\nredis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.\r\n02/02/2021 11:04:38 - INFO - wandb.sdk.internal.internal - Internal process exited\r\n```",
"I confirm with HF Transformers master and the latest ray[tune] version available using pip, the Trainer function works as expected.\r\n\r\nThank you for your help."
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
### Who can help
- ray/raytune: @richardliaw, @amogkam
- trainer: @sgugger
### Information
Model I am using (Bert, XLNet ...): sshleifer/distilbart-cnn-12-6
Dataset: dummy XSUM (50 samples in train, 5 samples in val)
## To reproduce
I have tried `trainer.train` with the exact same parameters and it works just fine.
I am trying to do a hyperparameter search with the Seq2SeqTrainer and RayTune. For now I am just trying a dummy search with 2 different learning rates and 2 different gradient accumulation steps. Here is my code:
```python
def hp_objective(metrics):
    loss = metrics.pop('eval_loss', None)
    _ = metrics.pop('epoch', None)
    _ = metrics.pop('eval_gen_len', None)
    return np.sum(list(metrics.values()))

def hp_space(trial):
    from ray import tune
    return {
        'learning_rate': tune.choice([1e-5, 1e-4]),
        'gradient_accumulation_steps': tune.choice([4, 8])
    }

def model_init():
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_args.model_name_or_path,
        config=config,
        cache_dir=model_args.cache_dir,
    )
    # use task specific params
    use_task_specific_params(model, data_args.task)
    # set num_beams for evaluation
    if data_args.eval_beams is None:
        data_args.eval_beams = model.config.num_beams
    # set decoder_start_token_id for MBart
    if model.config.decoder_start_token_id is None and isinstance(tokenizer, MBartTokenizer):
        assert (
            data_args.tgt_lang is not None and data_args.src_lang is not None
        ), "mBart requires --tgt_lang and --src_lang"
        model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
    if model_args.freeze_embeds:
        freeze_embeds(model)
    if model_args.freeze_encoder:
        freeze_params(model.get_encoder())
        assert_all_frozen(model.get_encoder())
    return model

trainer = Seq2SeqTrainer(
    model_init=model_init,
    config=config,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=Seq2SeqDataCollator(tokenizer, data_args, training_args.tpu_num_cores),
    compute_metrics=compute_metrics_fn,
    data_args=data_args)

logger.info("*** Hyperparameters Search ***")
start_time = time.time()

trainer.hyperparameter_search(
    direction = "maximize",
    compute_objective = hp_objective,
    hp_space = hp_space,
    backend = "ray",
    resources_per_trial = {'gpu': 1})
```
And I get the following error:
```
02/01/2021 16:26:31 - INFO - __main__ - *** Hyperparameters Search ***
02/01/2021 16:26:31 - INFO - ray.tune.ray_trial_executor - Initializing Ray automatically.For cluster usage or custom Ray initialization, call `ray.init(...)` before `tune.run`.
2021-02-01 16:26:33,796 INFO services.py:1173 -- View the Ray dashboard at http://127.0.0.1:8265
tcmalloc: large alloc 1236656128 bytes == 0x7f275ac1a000 @ 0x7f2ab6d16615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7f2a1e822c7c 0x7f2a1e829bfa 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829afc 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e82ad13 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e
tcmalloc: large alloc 1545822208 bytes == 0x7f26fe9e4000 @ 0x7f2ab6d16615 0x591e47 0x4cc179 0x4cc2db 0x566a71 0x5a4cd1 0x5a4fb8 0x7f2a1e822c7c 0x7f2a1e829bfa 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829afc 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e829fe2 0x7f2a1e82b34f 0x7f2a1e828a39 0x7f2a1e82ad13 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad5e 0x7f2a1e82b43c 0x7f2a1e828a39 0x7f2a1e829f45 0x7f2a1e82b7fa 0x7f2a1e828a39 0x7f2a1e82ad13
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/usr/local/lib/python3.6/dist-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/drive/My Drive/MAGMA: Summarization/transformers/examples/seq2seq/finetune_trainer.py", line 436, in <module>
main()
File "/content/drive/My Drive/MAGMA: Summarization/transformers/examples/seq2seq/finetune_trainer.py", line 351, in main
resources_per_trial = {'gpu': 1})
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1077, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 252, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/tune.py", line 325, in run
restore=restore)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 149, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/experiment.py", line 287, in register_if_needed
register_trainable(name, run_object)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/usr/local/lib/python3.6/dist-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/usr/local/lib/python3.6/dist-packages/ray/experimental/internal_kv.py", line 27, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 3050, in hset
return self.execute_command('HSET', name, *items)
File "/usr/local/lib/python3.6/dist-packages/redis/client.py", line 900, in execute_command
conn.send_command(*args)
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/usr/local/lib/python3.6/dist-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 32 while writing to socket. Broken pipe.
02/01/2021 16:26:48 - INFO - wandb.sdk.internal.internal - Internal process exited
```
## Expected behavior
The Trainer should run a hyperparameter search with the 8 different combinations of `learning_rate` and `gradient_accumulation_steps`.
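For reference, here is a minimal sketch of how the result of a successful search would be consumed once the call above works; the variable name `best_run` and the printed fields are illustrative, but `Trainer.hyperparameter_search` does return a `BestRun` object with `run_id`, `objective` and `hyperparameters`:
```python
# Hypothetical continuation of the snippet above, once the search runs end to end.
best_run = trainer.hyperparameter_search(
    direction="maximize",
    compute_objective=hp_objective,
    hp_space=hp_space,
    backend="ray",
    resources_per_trial={"gpu": 1},
)

# BestRun exposes the winning trial id, its objective value and the sampled hyperparameters.
print(best_run.run_id, best_run.objective)
print(best_run.hyperparameters)  # e.g. {'learning_rate': 1e-05, 'gradient_accumulation_steps': 4}
```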
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9929/comments | https://api.github.com/repos/huggingface/transformers/issues/9929/events | https://github.com/huggingface/transformers/issues/9929 | 798,482,350 | MDU6SXNzdWU3OTg0ODIzNTA= | 9,929 | Hyperparameter search w/ Optuna CUDA out of memory | {
"login": "marcoabrate",
"id": 43387597,
"node_id": "MDQ6VXNlcjQzMzg3NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/43387597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcoabrate",
"html_url": "https://github.com/marcoabrate",
"followers_url": "https://api.github.com/users/marcoabrate/followers",
"following_url": "https://api.github.com/users/marcoabrate/following{/other_user}",
"gists_url": "https://api.github.com/users/marcoabrate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcoabrate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcoabrate/subscriptions",
"organizations_url": "https://api.github.com/users/marcoabrate/orgs",
"repos_url": "https://api.github.com/users/marcoabrate/repos",
"events_url": "https://api.github.com/users/marcoabrate/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcoabrate/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think optuna properly optimizes GPU memory. We don't have support from them so you may be better using ray-tune, where the maintainers happily reply to question on our GitHub in case of problems. ",
"Thank you. I was using Optuna because with RayTune I get an error even before the first trial starts. I will open an issue about the RayTune error."
] | 1,612 | 1,612 | 1,612 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab and Ubuntu
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
### Who can help
- trainer: @sgugger
- Optuna: ???
## Information
Model I am using (Bert, XLNet ...): sshleifer/distilbart-cnn-12-6
## To reproduce
I am running a hyperparameter search with Optuna. I get a CUDA OOM error even if `per_device_train_batch_size` is set to 1 and the only parameters that I change are `learning_rate` and `gradient_accumulation_steps`. I have the same problem both with Google Colab and Ubuntu. Both of these environments have a 15 GB GPU.
The code I am running:
```python
def hp_objective(metrics):
    loss = metrics.pop('eval_loss', None)
    _ = metrics.pop('epoch', None)
    _ = metrics.pop('eval_gen_len', None)
    return np.sum(list(metrics.values()))

def hp_space(trial):
    return {
        'learning_rate': trial.suggest_float('learning_rate', 1e-5, 1e-2, log=True),
        'gradient_accumulation_steps':\
            trial.suggest_categorical('gradient_accumulation_steps', [4, 8]),
    }

def model_init():
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_args.model_name_or_path,
        config=config,
        cache_dir=model_args.cache_dir,
    )
    # use task specific params
    use_task_specific_params(model, data_args.task)
    # set num_beams for evaluation
    if data_args.eval_beams is None:
        data_args.eval_beams = model.config.num_beams
    # set decoder_start_token_id for MBart
    if model.config.decoder_start_token_id is None and isinstance(tokenizer, MBartTokenizer):
        assert (
            data_args.tgt_lang is not None and data_args.src_lang is not None
        ), "mBart requires --tgt_lang and --src_lang"
        model.config.decoder_start_token_id = tokenizer.lang_code_to_id[data_args.tgt_lang]
    if model_args.freeze_embeds:
        freeze_embeds(model)
    if model_args.freeze_encoder:
        freeze_params(model.get_encoder())
        assert_all_frozen(model.get_encoder())
    return model

trainer = Seq2SeqTrainer(
    model_init=model_init,
    config=config,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=Seq2SeqDataCollator(tokenizer, data_args, training_args.tpu_num_cores),
    compute_metrics=compute_metrics_fn,
    data_args=data_args,
)

logger.info("*** Hyperparameters Search ***")
start_time = time.time()

trainer.hyperparameter_search(
    direction = "maximize",
    compute_objective = hp_objective,
    hp_space = hp_space,
    backend = "optuna")
```
The error:
```
[INFO|modeling_utils.py:1149] 2021-02-01 15:24:41,640 >> All the weights of BartForConditionalGeneration were initialized from the model checkpoint at sshleifer/distilbart-cnn-12-6.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BartForConditionalGeneration for predictions without further training.
02/01/2021 15:24:41 - INFO - utils - using task specific params for summarization: {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142, 'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4}
[W 2021-02-01 15:24:42,103] Trial 8 failed because of the following error: RuntimeError('CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)',)
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 211, in _run_trial
value_or_values = func(trial)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 168, in _objective
trainer.train(model_path=model_path, trial=trial)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 622, in train
self.model = model.to(self.args.device)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 612, in to
return self._apply(convert)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 610, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)
Traceback (most recent call last):
File "/home/ubuntu/transformers/examples/seq2seq/finetune_trainer.py", line 435, in <module>
main()
File "/home/ubuntu/transformers/examples/seq2seq/finetune_trainer.py", line 350, in main
backend = "optuna")
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 1077, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 178, in run_hp_search_optuna
study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/study.py", line 385, in optimize
show_progress_bar=show_progress_bar,
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 73, in _optimize
progress_bar=progress_bar,
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 164, in _optimize_sequential
trial = _run_trial(study, func, catch)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 262, in _run_trial
raise func_err
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/optuna/_optimize.py", line 211, in _run_trial
value_or_values = func(trial)
File "/home/ubuntu/transformers/src/transformers/integrations.py", line 168, in _objective
trainer.train(model_path=model_path, trial=trial)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 622, in train
self.model = model.to(self.args.device)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 612, in to
return self._apply(convert)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 359, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 381, in _apply
param_applied = fn(param)
File "/home/ubuntu/miniconda3/envs/magma/lib/python3.6/site-packages/torch/nn/modules/module.py", line 610, in convert
return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 14.76 GiB total capacity; 13.66 GiB already allocated; 13.75 MiB free; 13.83 GiB reserved in total by PyTorch)
```
## Expected behavior
The GPU should never go OOM, since the batch size is 1 in all trials.
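As a side note, a common mitigation sketch when successive trials keep allocating new models on the same GPU (a generic illustration, not something attempted in this issue) is to explicitly drop the previous model and release cached blocks between trials:
```python
import gc

import torch

# Hypothetical cleanup between trials: drop the reference to the previous
# model, run garbage collection, and return cached CUDA blocks to the driver.
del model
gc.collect()
torch.cuda.empty_cache()
```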
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9929/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9928/comments | https://api.github.com/repos/huggingface/transformers/issues/9928/events | https://github.com/huggingface/transformers/pull/9928 | 798,472,218 | MDExOlB1bGxSZXF1ZXN0NTY1MjgzNjQ4 | 9,928 | [Tokenizer Utils Base] Make pad function more flexible | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Currently, tokenizers force the dict that is being padded to have an `input_ids` key. This restricts transformers tokenizers too much for models outside of NLP, such as Wav2Vec2: https://github.com/huggingface/transformers/pull/9659/files?file-filters%5B%5D=.py
As discussed offline, the cleanest approach is to add `input_ids` to the class attribute `model_input_names` and enforce a certain order. This is ensured by a test and a couple of comments that make the reader aware of it.
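As an illustration of the idea (a sketch only; `model_input_names` is a real class attribute on tokenizer classes, but the Wav2Vec2 values shown here are an assumption for illustration):
```python
# Sketch: a speech "tokenizer" declares which key drives padding by putting it
# first in model_input_names, instead of pad() hard-coding "input_ids".
class Wav2Vec2TokenizerSketch:
    # the first entry is the key whose sequences are measured and padded
    model_input_names = ["input_values", "attention_mask"]
```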
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9928",
"html_url": "https://github.com/huggingface/transformers/pull/9928",
"diff_url": "https://github.com/huggingface/transformers/pull/9928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9928.patch",
"merged_at": 1612251328000
} |
https://api.github.com/repos/huggingface/transformers/issues/9927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9927/comments | https://api.github.com/repos/huggingface/transformers/issues/9927/events | https://github.com/huggingface/transformers/issues/9927 | 798,321,716 | MDU6SXNzdWU3OTgzMjE3MTY= | 9,927 | Missing None verification in the CLM language modeling example | {
"login": "Aunsiels",
"id": 7902128,
"node_id": "MDQ6VXNlcjc5MDIxMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7902128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aunsiels",
"html_url": "https://github.com/Aunsiels",
"followers_url": "https://api.github.com/users/Aunsiels/followers",
"following_url": "https://api.github.com/users/Aunsiels/following{/other_user}",
"gists_url": "https://api.github.com/users/Aunsiels/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aunsiels/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aunsiels/subscriptions",
"organizations_url": "https://api.github.com/users/Aunsiels/orgs",
"repos_url": "https://api.github.com/users/Aunsiels/repos",
"events_url": "https://api.github.com/users/Aunsiels/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aunsiels/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"HI @Aunsiels you are right, thanks! Feel free to open PR to fix it :)\r\n\r\nIf `train_file` is `None` it should use `validation_file` to get the extension"
] | 1,612 | 1,612 | 1,612 | NONE | null | Here: https://github.com/huggingface/transformers/blob/1682804ebd504d3381523116773583a52f35afd1/examples/language-modeling/run_clm.py#L230, data_args.train_file can be None (as it is checked some lines above). Therefore, there should be a check to see if it is the case or not.
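A minimal sketch of the kind of guard being suggested (variable names follow `run_clm.py`; this is an illustration of the fix direction from the discussion above, not the final patch):
```python
# Hypothetical guard: fall back to the validation file when no training file is given.
if data_args.train_file is not None:
    extension = data_args.train_file.split(".")[-1]
else:
    extension = data_args.validation_file.split(".")[-1]
```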
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9927/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9926/comments | https://api.github.com/repos/huggingface/transformers/issues/9926/events | https://github.com/huggingface/transformers/issues/9926 | 798,291,473 | MDU6SXNzdWU3OTgyOTE0NzM= | 9,926 | Deploying a transformers pipeline into Google Cloud AI-Platform prediction | {
"login": "iElsha",
"id": 38140638,
"node_id": "MDQ6VXNlcjM4MTQwNjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/38140638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iElsha",
"html_url": "https://github.com/iElsha",
"followers_url": "https://api.github.com/users/iElsha/followers",
"following_url": "https://api.github.com/users/iElsha/following{/other_user}",
"gists_url": "https://api.github.com/users/iElsha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iElsha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iElsha/subscriptions",
"organizations_url": "https://api.github.com/users/iElsha/orgs",
"repos_url": "https://api.github.com/users/iElsha/repos",
"events_url": "https://api.github.com/users/iElsha/events{/privacy}",
"received_events_url": "https://api.github.com/users/iElsha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.",
"pinging @philschmid or @n1t0 who might know about Google's AI Platform (and other ways to deploy in the cloud)",
"> Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.\r\n\r\nI tried to precise the link of `tqdm` package but without success\r\n```python\r\nfrom setuptools import setup\r\n\r\nsetup(\r\n name='customerPredictionCustomerReview',\r\n version='0.1',\r\n scripts=['predictor.py'],\r\n install_requires=[\"tqdm\", \"transformers==4.2.2\"],\r\n dependency_links=[\r\n \"https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl\"]\r\n)\r\n```",
"@iElsha I am going to take a look later at why your deployment into Google Cloud AI-platform with a customer prediction routine might not work. \r\n\r\nIn addition, Google offers different other services to deploy `transformers` in the cloud. The easiest way I think is to use [managed Cloud Run](https://cloud.google.com/run). With Cloud Run you can deploy highly scalable containerized applications on a fully managed serverless platform it supports currently up to 8GB of memory and 4 CPUs. You just have to build a `flask` or `fastAPI` container and deploy it. \r\n\r\nAnother possible solution could be `GKE`, Google's managed Kubernetes service when you want to scale your application or want to be more flexible in terms of configuration. `GKE` supports `Cloud Run` too. So it is possible to use your `Cloud Run` container out-of-the-box on `GKE`. \r\n\r\nAnd last but not least there is [App Engine](https://cloud.google.com/appengine/docs/standard/python3/quickstart) a highly scalable fully managed platform. \r\n\r\n\r\n",
"> > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.\r\n> \r\n> I tried to precise the link of `tqdm` package but without success\r\n> \r\n> ```python\r\n> from setuptools import setup\r\n> \r\n> setup(\r\n> name='customerPredictionCustomerReview',\r\n> version='0.1',\r\n> scripts=['predictor.py'],\r\n> install_requires=[\"tqdm\", \"transformers==4.2.2\"],\r\n> dependency_links=[\r\n> \"https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl\"]\r\n> )\r\n> ```\r\n\r\n@iElsha do you have the complete code somewhere available? like in a Github Repository? I would like to try to recreate the error.",
"Thanks for the quick reply @philschmid \r\n\r\nHere's the github link: https://github.com/iElsha/ICC-Customer-system-AI\r\n\r\nAnd there the deployment commands:\r\n\r\n```shell\r\npython setup.py sdist --formats=gztar\r\ngsutil cp cloudDeploy/dist/customerPredictionCustomerReview-0.1.tar.gz gs://customer_system/src/\r\n\r\n# Create the model project once and update the gcloud tool\r\ngcloud ai-platform models create customerReviewModel --regions europe-west1 --project <YourProjectId>\r\ngcloud components install beta\r\n\r\n# create & delete command to manage the version\r\ngcloud beta ai-platform versions create v01 --model customerReviewModel --runtime-version 2.2 --python-version 3.7 --origin gs://customer_system/model --package-uris gs://customer_system/src/customerPredictionCustomerReview-0.1.tar.gz --prediction-class predictor.MyPredictor --project <YourProjectId>\r\ngcloud beta ai-platform versions delete v01 --model customerReviewModel --project <YourProjectId>\r\n```\r\n\r\n---\r\n\r\n**Edit 01/02 - 16:30:**\r\n\r\nI also tried the solution with `App Engine` (F4 - memory 1024MB) but it seems that it can not load TensorFlow properly:\r\n\r\n```json\r\n{\r\n \"textPayload\": \"2021-02-01 15:00:03.883430: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /layers/google.python.pip/pip/lib\",\r\n}\r\n```\r\n\r\nI am going to try with pytorch",
"I also tried to see where does the error comes from by only installing `tqdm` and not `transformer`. it worked and I succeed to deploy, meaning that the issue might come from `transformer` or somewhere else but `tqdm` seems fine?\r\n",
"@iElsha I could reproduce the error. After that, I researched and found that the Google Cloud AI-Platform `Custom prediction routines` is in BETA and not official GA and others have the same problem with installing packages. [Issue 1](https://stackoverflow.com/questions/62816129/how-do-you-override-google-ai-platforms-standard-librarys-i-e-upgrade-scikit) [Issue 2](https://stackoverflow.com/questions/64781326/getting-create-version-failed-bad-model-detected-with-error-on-ai-platform-wh) \r\nI think the issue is not from `transformers` side. You can create an Issue [at Google official Issue tracker](https://issuetracker.google.com/issues/new?component=187220&template=1161235) or try to create [a custom container for online prediction with AI-Platform](https://cloud.google.com/ai-platform/prediction/docs/custom-container-requirements) or use Cloud Run. I found this [blog post](https://chatbotslife.com/deploying-transformer-models-1350876016f) where a GPT-2 model is used. \r\n",
"> Another possible solution could be `GKE`, Google's managed Kubernetes service when you want to scale your application or want to be more flexible in terms of configuration. `GKE` supports `Cloud Run` too. So it is possible to use your `Cloud Run` container out-of-the-box on `GKE`.\r\n\r\nAs you suggested it works with cloud Run, just with a docker container.\r\n\r\nI before tried on AppEngine, where I was with TensorFlow (2G memory), but TensorFlow couldn't load there due to a missing dependency in the system. I switched to PyTorch and it worked for a few requests but exceed the memory and makes the service unavailable.\r\n\r\nCloud Run with a docker container and flask is, for now, the correct solution to deploy the transformers pipeline. I used a 4G & 1VCPU as settings with PyTorch, which seems lighter & faster to load on a cold boot than TensorFlow.\r\n\r\nThanks for the help\r\n",
"@iElsha Would be very interesting if you can at some point share about the operational aspects of Cloud Run (request latency distribution, scalability from simulated traffic, cost)! We could even write a blogpost about it.",
"> > > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.\r\n> > \r\n> > \r\n> > I tried to precise the link of `tqdm` package but without success\r\n> > ```python\r\n> > from setuptools import setup\r\n> > \r\n> > setup(\r\n> > name='customerPredictionCustomerReview',\r\n> > version='0.1',\r\n> > scripts=['predictor.py'],\r\n> > install_requires=[\"tqdm\", \"transformers==4.2.2\"],\r\n> > dependency_links=[\r\n> > \"https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl\"]\r\n> > )\r\n> > ```\r\n> \r\n> @iElsha do you have the complete code somewhere available? like in a Github Repository? I would like to try to recreate the error.\r\n\r\nThis kind of makes me feel that the issue is not with GCP Custom Prediction routines but some way `tqdm` and `transformers` are interacting when installing this way. I am able to install several other packages, including `tqdm` in a custom prediction routine build - but I cannot install `transformers`.",
"> > Hmmm I would say the issue here comes from `tqdm`. Do you know how come there is no distribution for `tqdm`? Managing to install it as a standalone would be a good first step imo.\r\n> \r\n> I tried to precise the link of `tqdm` package but without success\r\n> \r\n> ```python\r\n> from setuptools import setup\r\n> \r\n> setup(\r\n> name='customerPredictionCustomerReview',\r\n> version='0.1',\r\n> scripts=['predictor.py'],\r\n> install_requires=[\"tqdm\", \"transformers==4.2.2\"],\r\n> dependency_links=[\r\n> \"https://files.pythonhosted.org/packages/80/02/8f8880a4fd6625461833abcf679d4c12a44c76f9925f92bf212bb6cefaad/tqdm-4.56.0-py2.py3-none-any.whl\"]\r\n> )\r\n> ```\r\n\r\n install_requires=[\"tqdm-wheel\"] will help you installed the library I guess so because I had also similar kind of problems with libraries and I installed it this way.\r\nI think it will help you too."
] | 1,612 | 1,617 | 1,612 | NONE | null | I am trying to deploy the model "distilbert-base-uncased-finetuned-sst-2-english" into Google Cloud AI-platform with a [customer prediction routine](https://cloud.google.com/ai-platform/prediction/docs/custom-prediction-routines).
The code stays pretty simple but I encounter an issue when deploying the model.
```
Create Version failed. Bad model detected with error: "Failed to load model: Unexpected error when loading the model: problem in predictor - DistributionNotFound: The 'tqdm>=4.27' distribution was not found and is required by this application, \nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git master (Error code: 0)"
```
My setup.py file is:
```python
from setuptools import setup
setup(
    name='customerPredictionCustomerReview',
    version='0.1',
    scripts=['predictor.py'],
    install_requires=["transformers==4.2.2"],
)
```
My application is just using the sentiment-analysis pipeline and one model.
```python
model_path = os.path.join(model_dir, 'distilbert-base-uncased-finetuned-sst-2-english')
classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
```
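For context, here is a rough sketch of what a `predictor.py` entry point for an AI Platform custom prediction routine would look like around this pipeline; the class name and body below are assumptions pieced together from the snippets above, not the author's actual file:
```python
import os

from transformers import pipeline


class MyPredictor(object):
    """Hypothetical custom prediction routine wrapping the sentiment-analysis pipeline."""

    def __init__(self, classify):
        self._classify = classify

    def predict(self, instances, **kwargs):
        # instances is the list of raw strings sent to the prediction service
        return self._classify(instances)

    @classmethod
    def from_path(cls, model_dir):
        model_path = os.path.join(model_dir, 'distilbert-base-uncased-finetuned-sst-2-english')
        classify = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
        return cls(classify)
```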
I am wondering if you know why the other dependencies of transformers are not being installed properly. I have also tried to add `tqdm` to the setup's `install_requires`, but it didn't work; I got the same error. Would you have an idea here?
In addition, would you suggest another way to deploy the model than the one I used?
Thank you in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9925/comments | https://api.github.com/repos/huggingface/transformers/issues/9925/events | https://github.com/huggingface/transformers/issues/9925 | 798,290,297 | MDU6SXNzdWU3OTgyOTAyOTc= | 9,925 | Implementing ELECTRIC training for ELECTRA | {
"login": "stephantul",
"id": 8882233,
"node_id": "MDQ6VXNlcjg4ODIyMzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8882233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stephantul",
"html_url": "https://github.com/stephantul",
"followers_url": "https://api.github.com/users/stephantul/followers",
"following_url": "https://api.github.com/users/stephantul/following{/other_user}",
"gists_url": "https://api.github.com/users/stephantul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stephantul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stephantul/subscriptions",
"organizations_url": "https://api.github.com/users/stephantul/orgs",
"repos_url": "https://api.github.com/users/stephantul/repos",
"events_url": "https://api.github.com/users/stephantul/events{/privacy}",
"received_events_url": "https://api.github.com/users/stephantul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,612 | 1,615 | null | CONTRIBUTOR | null | # 🚀 Feature request
Google released Electric this summer at EMNLP (see: [here](https://www.aclweb.org/anthology/2020.emnlp-main.20.pdf)). Electric is like ELECTRA, but trained using a Noise Contrastive Estimation loss instead of a negative sampling loss.
## Motivation
Electric is well-suited for modeling perplexity scores, and can model these very efficiently. Modeling these perplexity scores using BERT requires N passes over the input sentence, where N is the number of tokens in the sentence (see [here](https://arxiv.org/abs/1910.14659)).
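To make the efficiency point concrete, here is a small hedged sketch of the difference (the `model` methods used below are assumed placeholders, not a real transformers API): a masked LM needs one forward pass per token to build a pseudo-log-likelihood, while an Electric-style model scores every position in a single pass.
```python
# Illustrative pseudo-code only; `score_masked_position` and `all_position_scores`
# are hypothetical helpers standing in for a masked-LM pass and an Electric pass.
def mlm_pseudo_log_likelihood(model, tokens):
    total = 0.0
    for i, token in enumerate(tokens):  # N separate forward passes
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += model.score_masked_position(masked, position=i, target=token)
    return total


def electric_pseudo_log_likelihood(model, tokens):
    return sum(model.all_position_scores(tokens))  # one forward pass
```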
## Your contribution
Electric has been implemented in the Google Electra repository. From what I can see, moving from ELECTRA-style to Electric-style training is not a huge code change, but I'm not familiar enough with the inner workings of transformers to make a judgment call on this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9925/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9925/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9924/comments | https://api.github.com/repos/huggingface/transformers/issues/9924/events | https://github.com/huggingface/transformers/pull/9924 | 798,247,951 | MDExOlB1bGxSZXF1ZXN0NTY1MDk0NDQz | 9,924 | [docs] fix auto model docs | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
Small doc fixes for auto model classes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9924",
"html_url": "https://github.com/huggingface/transformers/pull/9924",
"diff_url": "https://github.com/huggingface/transformers/pull/9924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9924.patch",
"merged_at": 1612185466000
} |
https://api.github.com/repos/huggingface/transformers/issues/9923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9923/comments | https://api.github.com/repos/huggingface/transformers/issues/9923/events | https://github.com/huggingface/transformers/pull/9923 | 798,236,875 | MDExOlB1bGxSZXF1ZXN0NTY1MDg1MTM0 | 9,923 | Fix bart conversion script | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
Fix import and add the `make_linear_from_emb` function in the script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9923/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9923",
"html_url": "https://github.com/huggingface/transformers/pull/9923",
"diff_url": "https://github.com/huggingface/transformers/pull/9923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9923.patch",
"merged_at": 1612196235000
} |
https://api.github.com/repos/huggingface/transformers/issues/9922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9922/comments | https://api.github.com/repos/huggingface/transformers/issues/9922/events | https://github.com/huggingface/transformers/pull/9922 | 798,233,021 | MDExOlB1bGxSZXF1ZXN0NTY1MDgyMDQz | 9,922 | Tensorflow doc changes on loss output size | {
"login": "janjitse",
"id": 16238701,
"node_id": "MDQ6VXNlcjE2MjM4NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/16238701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janjitse",
"html_url": "https://github.com/janjitse",
"followers_url": "https://api.github.com/users/janjitse/followers",
"following_url": "https://api.github.com/users/janjitse/following{/other_user}",
"gists_url": "https://api.github.com/users/janjitse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janjitse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janjitse/subscriptions",
"organizations_url": "https://api.github.com/users/janjitse/orgs",
"repos_url": "https://api.github.com/users/janjitse/repos",
"events_url": "https://api.github.com/users/janjitse/events{/privacy}",
"received_events_url": "https://api.github.com/users/janjitse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #9771 by changing the documentation to correctly state the size of the output loss.
I did not change the documentation for TFSeq2SeqQuestionAnsweringModelOutput and TFSeq2SeqSequenceClassifierOutput, as I could not find any code using these, so I was unsure what the correct output size would be.
I also fixed a few instances where I found the documentation referring to torch.LongTensor when tf.Tensor should be used.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@jplu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9922",
"html_url": "https://github.com/huggingface/transformers/pull/9922",
"diff_url": "https://github.com/huggingface/transformers/pull/9922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9922.patch",
"merged_at": 1612196271000
} |
https://api.github.com/repos/huggingface/transformers/issues/9921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9921/comments | https://api.github.com/repos/huggingface/transformers/issues/9921/events | https://github.com/huggingface/transformers/pull/9921 | 798,134,937 | MDExOlB1bGxSZXF1ZXN0NTY1MDAwNTEy | 9,921 | [Templates] Add template "call-for-model" markdown and "call-for-big-bird" markdown | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds a template to generate a "call-for-model" sheet and also adds one for [BigBird](https://github.com/google-research/bigbird)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9921/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9921/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9921",
"html_url": "https://github.com/huggingface/transformers/pull/9921",
"diff_url": "https://github.com/huggingface/transformers/pull/9921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9921.patch",
"merged_at": 1612529275000
} |
https://api.github.com/repos/huggingface/transformers/issues/9920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9920/comments | https://api.github.com/repos/huggingface/transformers/issues/9920/events | https://github.com/huggingface/transformers/issues/9920 | 798,100,137 | MDU6SXNzdWU3OTgxMDAxMzc= | 9,920 | Would you like to add convert the generator script by ConvBert | {
"login": "RyanHuangNLP",
"id": 49582480,
"node_id": "MDQ6VXNlcjQ5NTgyNDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/49582480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RyanHuangNLP",
"html_url": "https://github.com/RyanHuangNLP",
"followers_url": "https://api.github.com/users/RyanHuangNLP/followers",
"following_url": "https://api.github.com/users/RyanHuangNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/RyanHuangNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RyanHuangNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RyanHuangNLP/subscriptions",
"organizations_url": "https://api.github.com/users/RyanHuangNLP/orgs",
"repos_url": "https://api.github.com/users/RyanHuangNLP/repos",
"events_url": "https://api.github.com/users/RyanHuangNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/RyanHuangNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Pinging @abhishekkrthakur ",
"@abhishekkrthakur I have try to convert a mlm convbert to transformers one, this is the convert code\r\n\r\n```\r\nimport torch\r\nimport os\r\n\r\nimport tensorflow as tf\r\n\r\nfrom transformers import ConvBertConfig, ConvBertForMaskedLM, ConvBertPreTrainedModel\r\nfrom transformers.utils import logging\r\nfrom operator import attrgetter\r\n\r\nlogger = logging.get_logger(__name__)\r\n\r\nconfig_file = \"weights/convbert_base_mlm/config.json\"\r\ntf_path = \"tf_weights/ft_local/model.ckpt-490000\"\r\npytorch_dump_path = \"weights/convbert_base_mlm\"\r\nconfig = ConvBertConfig.from_json_file(config_file)\r\n\r\n#model = ConvBertPreTrainedModel(config)\r\nmodel = ConvBertForMaskedLM(config)\r\n\r\ndef load_tf_weights_in_convbert(model, config, tf_checkpoint_path):\r\n \"\"\"Load tf checkpoints in a pytorch model.\"\"\"\r\n try:\r\n import tensorflow as tf\r\n except ImportError:\r\n logger.error(\r\n \"Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see \"\r\n \"https://www.tensorflow.org/install/ for installation instructions.\"\r\n )\r\n raise\r\n tf_path = os.path.abspath(tf_checkpoint_path)\r\n logger.info(\"Converting TensorFlow checkpoint from {}\".format(tf_path))\r\n # Load weights from TF model\r\n init_vars = tf.train.list_variables(tf_path)\r\n tf_data = {}\r\n for name, shape in init_vars:\r\n logger.info(\"Loading TF weight {} with shape {}\".format(name, shape))\r\n array = tf.train.load_variable(tf_path, name)\r\n tf_data[name] = array\r\n\r\n param_mapping = {\r\n \"convbert.embeddings.word_embeddings.weight\": \"electra/embeddings/word_embeddings\",\r\n \"convbert.embeddings.position_embeddings.weight\": \"electra/embeddings/position_embeddings\",\r\n \"convbert.embeddings.token_type_embeddings.weight\": \"electra/embeddings/token_type_embeddings\",\r\n \"convbert.embeddings.LayerNorm.weight\": \"electra/embeddings/LayerNorm/gamma\",\r\n \"convbert.embeddings.LayerNorm.bias\": \"electra/embeddings/LayerNorm/beta\",\r\n \"convbert.embeddings_project.weight\": \"electra/embeddings_project/kernel\",\r\n \"convbert.embeddings_project.bias\": \"electra/embeddings_project/bias\",\r\n \"generator_predictions.LayerNorm.weight\": \"generator_predictions/LayerNorm/gamma\",\r\n \"generator_predictions.LayerNorm.bias\": \"generator_predictions/LayerNorm/beta\",\r\n \"generator_predictions.dense.weight\": \"generator_predictions/dense/kernel\",\r\n \"generator_predictions.dense.bias\": \"generator_predictions/dense/bias\",\r\n \"generator_lm_head.bias\": \"generator_predictions/output_bias\"\r\n }\r\n if config.num_groups > 1:\r\n group_dense_name = \"g_dense\"\r\n else:\r\n group_dense_name = \"dense\"\r\n\r\n for j in range(config.num_hidden_layers):\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.query.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/query/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.query.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/query/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.key.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/key/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.key.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/key/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.value.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/value/kernel\"\r\n param_mapping[\r\n 
f\"convbert.encoder.layer.{j}.attention.self.value.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/value/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.depthwise.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_key/depthwise_kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.pointwise.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_key/pointwise_kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.key_conv_attn_layer.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_key/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.conv_kernel_layer.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_kernel/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.conv_kernel_layer.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_kernel/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.conv_out_layer.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_point/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.self.conv_out_layer.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/self/conv_attn_point/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.output.dense.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/output/dense/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.output.LayerNorm.weight\"\r\n ] = f\"electra/encoder/layer_{j}/attention/output/LayerNorm/gamma\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.output.dense.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/output/dense/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.attention.output.LayerNorm.bias\"\r\n ] = f\"electra/encoder/layer_{j}/attention/output/LayerNorm/beta\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.intermediate.dense.weight\"\r\n ] = f\"electra/encoder/layer_{j}/intermediate/{group_dense_name}/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.intermediate.dense.bias\"\r\n ] = f\"electra/encoder/layer_{j}/intermediate/{group_dense_name}/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.output.dense.weight\"\r\n ] = f\"electra/encoder/layer_{j}/output/{group_dense_name}/kernel\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.output.dense.bias\"\r\n ] = f\"electra/encoder/layer_{j}/output/{group_dense_name}/bias\"\r\n param_mapping[\r\n f\"convbert.encoder.layer.{j}.output.LayerNorm.weight\"\r\n ] = f\"electra/encoder/layer_{j}/output/LayerNorm/gamma\"\r\n param_mapping[f\"convbert.encoder.layer.{j}.output.LayerNorm.bias\"] = f\"electra/encoder/layer_{j}/output/LayerNorm/beta\"\r\n\r\n for param in model.named_parameters():\r\n param_name = param[0]\r\n retriever = attrgetter(param_name)\r\n result = retriever(model)\r\n tf_name = param_mapping[param_name]\r\n value = torch.from_numpy(tf_data[tf_name])\r\n logger.info(f\"TF: {tf_name}, PT: {param_name} \")\r\n if tf_name.endswith(\"/kernel\"):\r\n if not tf_name.endswith(\"/intermediate/g_dense/kernel\"):\r\n if not tf_name.endswith(\"/output/g_dense/kernel\"):\r\n value = value.T\r\n if tf_name.endswith(\"/depthwise_kernel\"):\r\n value = value.permute(1, 2, 0) # 2, 0, 1\r\n if tf_name.endswith(\"/pointwise_kernel\"):\r\n value = value.permute(2, 1, 0) # 2, 1, 0\r\n if tf_name.endswith(\"/conv_attn_key/bias\"):\r\n value 
= value.unsqueeze(-1)\r\n result.data = value\r\n return model\r\n\r\nmodel = load_tf_weights_in_convbert(model, config, tf_path)\r\nmodel.save_pretrained(pytorch_dump_path)\r\n```",
"@RyanHuangNLP good idea! do you want to make a PR? Or should I fix it? ",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"@RyanHuangNLP I have a question regarding your script. I tried to extract the generator from the tf checkpoint, but it seems the size mismatch. I reduce the hidden_size by 4 (25%), as in the electra config file, and num_attention_head to 4. Does your script is currently converting the discriminator instead?",
"@Shiro-LK my script is not for the electra one, that is for mlm one, may be you should first convert the generator parameters names to the discriminator. It is important to check the parameter name",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
I was excited to find that transformers has added support for ConvBERT, but it only provides a script for converting the discriminator. Would you consider also supporting conversion of the ConvBERT generator, like for ELECTRA? I have trained both an ELECTRA-style ConvBERT and a masked-LM ConvBERT.
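To make the request concrete, below is a small sketch of how I inspect the TF checkpoint to confirm that the generator weights are present alongside the discriminator ones (the checkpoint path is only a placeholder):
```python
import tensorflow as tf

# List the variables in the pre-trained TF checkpoint; generator weights show up
# under names such as "generator_predictions/...". The path below is a placeholder.
tf_checkpoint = "tf_weights/model.ckpt-490000"
for name, shape in tf.train.list_variables(tf_checkpoint):
    print(name, shape)
```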
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9920/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9919/comments | https://api.github.com/repos/huggingface/transformers/issues/9919/events | https://github.com/huggingface/transformers/issues/9919 | 797,896,287 | MDU6SXNzdWU3OTc4OTYyODc= | 9,919 | AttributeError: module 'torch.utils' has no attribute 'checkpoint' for fine tune LED | {
"login": "mmoya01",
"id": 17535683,
"node_id": "MDQ6VXNlcjE3NTM1Njgz",
"avatar_url": "https://avatars.githubusercontent.com/u/17535683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmoya01",
"html_url": "https://github.com/mmoya01",
"followers_url": "https://api.github.com/users/mmoya01/followers",
"following_url": "https://api.github.com/users/mmoya01/following{/other_user}",
"gists_url": "https://api.github.com/users/mmoya01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmoya01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmoya01/subscriptions",
"organizations_url": "https://api.github.com/users/mmoya01/orgs",
"repos_url": "https://api.github.com/users/mmoya01/repos",
"events_url": "https://api.github.com/users/mmoya01/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmoya01/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @mmoya01 \r\n\r\nThis error happens if `torch.utils.checkpoint` is not imported. This is fixed on master now, see #9626",
"@patil-suraj thank you, that work",
"I still get the same error (when training DeBERTa-V3-base) on a colab GPU with Trainsformers==4.12\r\n\r\nI using\r\n`model.gradient_checkpointing_enable() # to decrease memory usage\r\n`\r\nBefore doing normal training via the HF trainer.\r\n\r\n(It's fixed if I run this:) \r\n`from torch.utils.checkpoint import checkpoint\r\n`\r\n",
"I get the same error when running training on `DebertaForSequenceClassification` using the Trainer API with `gradient_checkpointing` set to True.\r\n\r\n@MoritzLaurer 's solution works for this also",
"> \r\n\r\nthanks, it worked! "
] | 1,612 | 1,662 | 1,612 | NONE | null | Hello, I fine-tuned my own LED model by following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=tLM3niQqhEzP) and saved it using
```python
led.save_pretrained("longformer2Bart")
tokenizer.save_pretrained("longformer2Bart")
```
However, whenever I try testing that model using something like this:
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
model = LEDForConditionalGeneration.from_pretrained("longformer2Bart")
tokenizer = LEDTokenizer.from_pretrained("longformer2Bart")
article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. 
Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. 
Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver."""
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
I get the following error
```
AttributeError Traceback (most recent call last)
<ipython-input-16-6227477597c7> in <module>
8
9 input_ids = tokenizer(article, return_tensors="pt").input_ids
---> 10 output_ids = model.generate(input_ids)
11
12 # print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
~/.virtualenvs/insights2/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
24 try:
25 with self:
---> 26 x = next(gen)
27 yield x
28 except StopIteration:
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
831 if self.config.is_encoder_decoder:
832 # add encoder_outputs to model_kwargs
--> 833 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
834
835 # set input_ids as decoder_input_ids
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
376 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
377 }
--> 378 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
379 return model_kwargs
380
~/.virtualenvs/insights2/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 self._forward_hooks.values()):
726 hook_result = hook(self, input, result)
--> 727 if hook_result is not None:
728 result = hook_result
729 if (len(self._backward_hooks) > 0) or (len(_global_backward_hooks) > 0):
~/.virtualenvs/insights2/lib/python3.6/site-packages/transformers/models/led/modeling_led.py in forward(self, input_ids, attention_mask, global_attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
1703 return custom_forward
1704
-> 1705 layer_outputs = torch.utils.checkpoint.checkpoint(
1706 create_custom_forward(encoder_layer),
1707 hidden_states,
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
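For reference, a minimal sketch of a possible workaround (assuming the failure is only that the `torch.utils.checkpoint` submodule is never imported) is to import it explicitly before generating:
```python
import torch.utils.checkpoint  # explicitly import the submodule so torch.utils.checkpoint.checkpoint is defined

from transformers import LEDTokenizer, LEDForConditionalGeneration

model = LEDForConditionalGeneration.from_pretrained("longformer2Bart")
tokenizer = LEDTokenizer.from_pretrained("longformer2Bart")

# Turning gradient checkpointing off for pure inference may also skip the failing
# code path entirely (assumption: the config flag is what enables it here):
# model.config.gradient_checkpointing = False

input_ids = tokenizer("Jim Henson was a nice puppet", return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```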
I don't run into this error if I load the `patrickvonplaten/led-large-16384-pubmed` model instead, so I'm not sure whether I saved my model incorrectly. @patrickvonplaten or anyone else in the community, I'd greatly appreciate any help with this | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9918/comments | https://api.github.com/repos/huggingface/transformers/issues/9918/events | https://github.com/huggingface/transformers/issues/9918 | 797,881,556 | MDU6SXNzdWU3OTc4ODE1NTY= | 9,918 | [doc] transformers.PreTrainedTokenizer.encode() doesn't get resolved to its doc | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right, but changing the class won't help us as `PreTrainedTokenizerBase` is not documented either. A quick fix would be to just add the `encode` method to the doc in `PreTrainedTokenizer` so that the reference gets resolved.\r\n\r\n> Also it seems that all modules have this\r\n> What are input IDs? <../glossary.html#input-ids>__\r\n> but not where input_ids are documented, instead after the mask - shouldn't that line be 2 records up?\r\n\r\nIt seems to be only in the T5 model from a quick look. This is missing in the `input_ids` arg (but it should also be in the `decoder_input_ids` args). Did you find other models where it's missing?\r\n\r\nCan do a quick PR to fix those tomorrow morning.",
"> You're right, but changing the class won't help us as PreTrainedTokenizerBase is not documented either.\r\n\r\nIsn't this the documentation?\r\n\r\nhttps://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode\r\n\r\n> It seems to be only in the T5 model from a quick look. This is missing in the input_ids arg (but it should also be in the decoder_input_ids args). Did you find other models where it's missing?\r\n\r\nIt's hard to devise a full-proof detector, because the input varies, but here is a quick attempt that catches a few of missing ones:\r\n```\r\n# from the root of the repo\r\ngrep -Inr -A30 'input_ids (:obj:' src/transformers/models/ | \\\r\nperl -ne '$x .= $_; END { for (split /--/, $x) { s/attention_mask.*//msg; print if !/What are input IDs/ } }'\r\n```\r\nBasically I'm trying to match every instance of the `*_input_ids` doc entries (assuming they all have the same pattern), I dump the subsequent text and then I check whether there is a matching \"What are input IDs\" in the next few lines. I also snip out any text after `attention_mask` to avoid overlap with entries like `decoder_input_ids`, which may have this pointer.\r\n\r\nIt dumps output where it's most likely missing, like in this entry:\r\n```\r\nsrc/transformers/models/t5/modeling_tf_t5.py:942: decoder_input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):\r\nsrc/transformers/models/t5/modeling_tf_t5.py-943- Provide for sequence to sequence training. T5 uses the :obj:`pad_token_id` as the starting token for\r\nsrc/transformers/models/t5/modeling_tf_t5.py-944- :obj:`decoder_input_ids` generation. If :obj:`past_key_values` is used, optionally only the last\r\nsrc/transformers/models/t5/modeling_tf_t5.py-945- :obj:`decoder_input_ids` have to be input (see :obj:`past_key_values`).\r\nsrc/transformers/models/t5/modeling_tf_t5.py-946-\r\nsrc/transformers/models/t5/modeling_tf_t5.py-947- To know more on how to prepare :obj:`decoder_input_ids` for pretraining take a look at `T5 Training\r\nsrc/transformers/models/t5/modeling_tf_t5.py-948- <./t5.html#training>`__. If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset,\r\nsrc/transformers/models/t5/modeling_tf_t5.py-949- :obj:`decoder_input_ids` takes the value of :obj:`input_ids`.\r\n```\r\n\r\nThe detected chunks are just double newline separated. There are probably a few false positives, but most seem to be true positives. You have the file and the line for the context.\r\n\r\nAnd in many places where \"What are input IDs\" are, in the same place the corresponding entry for attention is missing.\r\n\r\nAlso note I only scanned under `/models/`, there is more in non-model files, but I think it's by design.\r\n\r\n",
"> Isn't this the documentation?\r\n>https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode\r\n\r\nAh yes, but this is under the \"internal\" tools, so let's have the subclasses show the documentation since I doubt users will go that far down.\r\n\r\nWill try your magic perl, thanks!",
"Well, I meant that the xref link could link to that page. So it's not about users browsing to it, but sphinx resolving to that doc. \r\n\r\nUnless I'm missing something and you are talking about something else."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | There are a few
> See: transformers.PreTrainedTokenizer.encode()
in the docstrings, but they don't resolve to anything in the online docs, since the `encode` method is in `PreTrainedTokenizerBase`
as it can be seen: https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.encode
Should sphinx be able to resolve inheritance and still point to the right doc, or must the docs be modified to say:
> See: transformers.PreTrainedTokenizerBase.encode()
instead?
There are 117 of these.
Example:
https://huggingface.co/transformers/model_doc/t5.html#transformers.T5Model.forward
> Indices can be obtained using T5Tokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for detail.
the `encode` method doesn't get a link.
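For reference, here is a rough sketch of what such a docstring could look like if it pointed at the class that actually defines the method; this is only illustrative, the exact wording in the repo differs:
```python
def forward(self, input_ids=None, attention_mask=None):
    r"""
    Args:
        input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary. Indices can be obtained using
            :class:`~transformers.T5Tokenizer`. See :meth:`transformers.PreTrainedTokenizerBase.encode`
            and :meth:`transformers.PreTrainedTokenizerBase.__call__` for details.

            `What are input IDs? <../glossary.html#input-ids>`__
    """
```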
-----------------------
Also it seems that all modules have this
> `What are input IDs? <../glossary.html#input-ids>`__
but it does not appear where `input_ids` is documented; instead it shows up after the attention mask entry. Shouldn't that line be 2 records up?

@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9918/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9917/comments | https://api.github.com/repos/huggingface/transformers/issues/9917/events | https://github.com/huggingface/transformers/pull/9917 | 797,824,342 | MDExOlB1bGxSZXF1ZXN0NTY0NzM4NDY0 | 9,917 | distilbert: fix creation of sinusoidal embeddings | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | COLLABORATOR | null | Hi,
a similar issue to the one reported by @stas00 for BART, see #8226.
The creation of sinusoidal embeddings is currently not working on PyTorch 1.8+.
It fails with:
```bash
File "/mnt/europeana-bert/flair/flair/embeddings/token.py", line 820, in __init__
self.model = AutoModel.from_pretrained(model, config=config, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 728, in from_pretrained
return MODEL_MAPPING[type(config)].from_pretrained(
File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1034, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 419, in __init__
self.embeddings = Embeddings(config) # Embeddings
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 88, in __init__
create_sinusoidal_embeddings(
File "/opt/conda/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 76, in create_sinusoidal_embeddings
out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
```
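For context, here is a minimal standalone sketch of the pattern that triggers the error and one way of writing the initialization that avoids it (setting `requires_grad = False` before the in-place writes); this is illustrative and may not match the exact diff in this PR:
```python
import numpy as np
import torch
import torch.nn as nn

def create_sinusoidal_embeddings(n_pos, dim, out):
    # Build the sinusoid table in NumPy.
    position_enc = np.array(
        [[pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)] for pos in range(n_pos)]
    )
    # Mark the parameter as not requiring grad *before* writing into it in-place;
    # otherwise PyTorch 1.8+ raises "a view of a leaf Variable that requires grad
    # is being used in an in-place operation".
    out.requires_grad = False
    out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
    out[:, 1::2] = torch.FloatTensor(np.cos(position_enc[:, 1::2]))
    out.detach_()

emb = nn.Embedding(512, 768)
create_sinusoidal_embeddings(512, 768, emb.weight)
```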
I've seen this problem when trying to train a model in Flair with DistilBERT as feature-based embeddings, as well as when training a DistilBERT model from scratch using the official example.
It can be reproduced in a `nvcr.io/nvidia/pytorch:20.12-py3` container, that comes with PyTorch 1.8. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9917/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9917/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9917",
"html_url": "https://github.com/huggingface/transformers/pull/9917",
"diff_url": "https://github.com/huggingface/transformers/pull/9917.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9917.patch",
"merged_at": 1612370536000
} |
https://api.github.com/repos/huggingface/transformers/issues/9916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9916/comments | https://api.github.com/repos/huggingface/transformers/issues/9916/events | https://github.com/huggingface/transformers/issues/9916 | 797,805,601 | MDU6SXNzdWU3OTc4MDU2MDE= | 9,916 | RAG + DPR model performance issues | {
"login": "krishanudb",
"id": 11831343,
"node_id": "MDQ6VXNlcjExODMxMzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/11831343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishanudb",
"html_url": "https://github.com/krishanudb",
"followers_url": "https://api.github.com/users/krishanudb/followers",
"following_url": "https://api.github.com/users/krishanudb/following{/other_user}",
"gists_url": "https://api.github.com/users/krishanudb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishanudb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishanudb/subscriptions",
"organizations_url": "https://api.github.com/users/krishanudb/orgs",
"repos_url": "https://api.github.com/users/krishanudb/repos",
"events_url": "https://api.github.com/users/krishanudb/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishanudb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,612 | 1,614 | 1,614 | NONE | null | Hi.
I am trying to reproduce the results reported in the Retrieval-Augmented Generation (RAG) paper for question answering on the Natural Questions (NQ) dataset (44% Exact Match accuracy).
However, I am not able to reproduce them.
Can someone kindly let me know which DPR model, DPR dataset and RAG dataset were used to obtain the 44% EM accuracy on the NQ dataset?
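For reference, this is a minimal sketch of the setup I am evaluating with; the choice of the released `facebook/rag-sequence-nq` checkpoint and the `wiki_dpr` index is exactly the assumption I would like to have confirmed:
```python
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# use_dummy_dataset=True keeps this a quick smoke test; the full evaluation
# presumably needs the complete wiki_dpr index instead.
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", return_tensors="pt"
)
generated = model.generate(input_ids=input_dict["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```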
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9915/comments | https://api.github.com/repos/huggingface/transformers/issues/9915/events | https://github.com/huggingface/transformers/issues/9915 | 797,728,536 | MDU6SXNzdWU3OTc3Mjg1MzY= | 9,915 | prediction_step() is not using compute_loss() | {
"login": "hadifar",
"id": 7101287,
"node_id": "MDQ6VXNlcjcxMDEyODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7101287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadifar",
"html_url": "https://github.com/hadifar",
"followers_url": "https://api.github.com/users/hadifar/followers",
"following_url": "https://api.github.com/users/hadifar/following{/other_user}",
"gists_url": "https://api.github.com/users/hadifar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadifar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadifar/subscriptions",
"organizations_url": "https://api.github.com/users/hadifar/orgs",
"repos_url": "https://api.github.com/users/hadifar/repos",
"events_url": "https://api.github.com/users/hadifar/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadifar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Mmmm probably. This is a bit tricky to make sure it doesn't break anything but makes more sense. I'll try to look at this on Monday.",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,612 | 1,614 | 1,614 | NONE | null | Hi @sgugger , I think there is an issue with `prediction_step` in `trainer.py`. The problem arises when implementing a custom loss function that requires reshaping the input labels. For loss calculation, `training_step` calls `compute_loss()`, which is totally fine, but `prediction_step` computes the loss without going through `compute_loss()`. This inconsistency causes some issues. Do you think it would be better to call `compute_loss()` in both cases in order to avoid this problem?
Update: from reading the code, the loss computation is tightly coupled with `LabelSmoother`, which makes it hard to handle both cases in a single function. If you have any suggestions, I would be happy to contribute to Hugging Face ;)
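In the meantime, here is a simplified sketch of one possible workaround, a `Trainer` subclass whose `prediction_step` also goes through `compute_loss()`; it only returns the loss, ignores mixed-precision branches, and the method signatures may vary slightly between versions:
```python
import torch
from transformers import Trainer

class CustomLossTrainer(Trainer):
    def compute_loss(self, model, inputs):
        # Example custom loss that reshapes the labels before comparing to the logits.
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fct = torch.nn.BCEWithLogitsLoss()
        return loss_fct(outputs.logits.view(-1), labels.float().view(-1))

    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # Route evaluation through compute_loss() as well, so the custom loss is used
        # consistently; pass a shallow copy because compute_loss pops "labels".
        inputs = self._prepare_inputs(inputs)
        with torch.no_grad():
            loss = self.compute_loss(model, dict(inputs))
        return (loss.detach(), None, None)
```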
[prediction_step()](https://huggingface.co/transformers/_modules/transformers/trainer.html#Trainer.prediction_step)
[training_step()](https://huggingface.co/transformers/_modules/transformers/trainer.html#Trainer.training_step) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9915/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9914/comments | https://api.github.com/repos/huggingface/transformers/issues/9914/events | https://github.com/huggingface/transformers/issues/9914 | 797,691,171 | MDU6SXNzdWU3OTc2OTExNzE= | 9,914 | AttributeError: 'torch.Size' object has no attribute 'as_list' | {
"login": "hiteshsom",
"id": 17461216,
"node_id": "MDQ6VXNlcjE3NDYxMjE2",
"avatar_url": "https://avatars.githubusercontent.com/u/17461216?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hiteshsom",
"html_url": "https://github.com/hiteshsom",
"followers_url": "https://api.github.com/users/hiteshsom/followers",
"following_url": "https://api.github.com/users/hiteshsom/following{/other_user}",
"gists_url": "https://api.github.com/users/hiteshsom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hiteshsom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hiteshsom/subscriptions",
"organizations_url": "https://api.github.com/users/hiteshsom/orgs",
"repos_url": "https://api.github.com/users/hiteshsom/repos",
"events_url": "https://api.github.com/users/hiteshsom/events{/privacy}",
"received_events_url": "https://api.github.com/users/hiteshsom/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread.",
"I have the same question.",
"Hello! You're using TensorFlow models (see the `TF` prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the `TF` prefix) or full TF (ask the tokenizer to return `tf` values)",
"I met the same issue, I did not know how to fix it\r\n```\r\ntensor([[ 0, 24948, 5357, 88, 14, 397, 1176, 6724, 7, 35297,\r\n 18109, 5814, 16, 43, 167, 4446, 37361, 381, 2, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]],\r\n device='cuda:0')\r\ntensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],\r\n device='cuda:0')\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-13-1309f9063eea>](https://localhost:8080/#) in <module>\r\n 1 _, tokenizer = load_pho_bert()\r\n----> 2 infer('Cảm ơn bạn đã chạy thử model của mình. Chúc một ngày tốt lành nha!', tokenizer)\r\n\r\n2 frames\r\n[/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py](https://localhost:8080/#) in display_shape(shape)\r\n 269 \r\n 270 def display_shape(shape):\r\n--> 271 return str(tuple(shape.as_list()))\r\n 272 \r\n 273 \r\n\r\nAttributeError: 'torch.Size' object has no attribute 'as_list'\r\n```",
"> Hello! You're using TensorFlow models (see the `TF` prefix) but you're asking the tokenizer to return PyTorch tensors. You should either stick to full PyTorch (remove the `TF` prefix) or full TF (ask the tokenizer to return `tf` values)\r\n\r\nPlease help me how to fix this problem? How can I change my code?\r\ndef infer(text, tokenizer, max_len=120):\r\n device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\r\n print(device)\r\n class_names = ['thế giới', 'thể thao', 'văn hóa', 'vi tính']\r\n\r\n model = tf.keras.models.load_model('./models/cnn_nlp_text_classification_4_classer.h5')\r\n\r\n encoded_review = tokenizer.encode_plus(\r\n text,\r\n max_length=max_len,\r\n truncation=True,\r\n add_special_tokens=True,\r\n padding='max_length',\r\n return_attention_mask=True,\r\n return_token_type_ids=False,\r\n return_tensors='pt',\r\n )\r\n\r\n input_ids = encoded_review['input_ids'].to(device)\r\n print(input_ids.shape)\r\n attention_mask = encoded_review['attention_mask'].to(device)\r\n print(attention_mask.shape)\r\n\r\n output = model(input_ids, attention_mask)\r\n ==> error happen here\r\n"
] | 1,612 | 1,663 | 1,614 | NONE | null | Hello,
I ran the following official example script from [LongformerForQuestionAnswering](https://huggingface.co/transformers/model_doc/longformer.html#longformerforquestionanswering):
```
# Tokenizer
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
# Model
model = TFLongformerForQuestionAnswering.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# default is local attention everywhere
# the forward method will automatically set global attention on question tokens
attention_mask = encoding["attention_mask"]
outputs = model(input_ids, attention_mask=attention_mask)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_logits) :torch.argmax(end_logits)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)) # remove space prepending space token
```
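(Side note for readers: a fully-TensorFlow variant of the same snippet is sketched below for reference; the `return_tensors="tf"` switch and the `tf.argmax` indexing are my assumptions about what the all-TF version should look like, not part of the official docs.)
```
import tensorflow as tf
from transformers import LongformerTokenizer, TFLongformerForQuestionAnswering

tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')
model = TFLongformerForQuestionAnswering.from_pretrained('allenai/longformer-large-4096-finetuned-triviaqa')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
# ask the tokenizer for TF tensors so they match the TF model
encoding = tokenizer(question, text, return_tensors="tf")
outputs = model(encoding["input_ids"], attention_mask=encoding["attention_mask"])

# pick the most likely start/end positions and decode the answer span
start_idx = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end_idx = int(tf.argmax(outputs.end_logits, axis=-1)[0])
all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].numpy().tolist())
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(all_tokens[start_idx : end_idx + 1]))
```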
However, with the snippet above I got the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-18-4bf253125151> in <module>
7 attention_mask = encoding["attention_mask"]
8
----> 9 outputs = model(input_ids, attention_mask=attention_mask)
10 start_logits = outputs.start_logits
11 end_logits = outputs.end_logits
~\Documents\env\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
983
984 with ops.enable_auto_cast_variables(self._compute_dtype_object):
--> 985 outputs = call_fn(inputs, *args, **kwargs)
986
987 if self._activity_regularizer:
~\Documents\env\lib\site-packages\transformers\modeling_tf_longformer.py in call(self, inputs, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict, start_positions, end_positions, training)
1492 # put global attention on all tokens until `config.sep_token_id` is reached
1493 sep_token_indices = tf.where(input_ids == self.config.sep_token_id)
-> 1494 global_attention_mask = _compute_global_attention_mask(shape_list(input_ids), sep_token_indices)
1495
1496 outputs = self.longformer(
~\Documents\env\lib\site-packages\transformers\modeling_tf_utils.py in shape_list(x)
924 :obj:`List[int]`: The shape of the tensor as a list.
925 """
--> 926 static = x.shape.as_list()
927 dynamic = tf.shape(x)
928 return [dynamic[i] if s is None else s for i, s in enumerate(static)]
AttributeError: 'torch.Size' object has no attribute 'as_list'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9914/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9914/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9913/comments | https://api.github.com/repos/huggingface/transformers/issues/9913/events | https://github.com/huggingface/transformers/issues/9913 | 797,662,260 | MDU6SXNzdWU3OTc2NjIyNjA= | 9,913 | Gradient accumulation and distributed parallelism will reduce the effect? | {
"login": "wulaoshi",
"id": 27938964,
"node_id": "MDQ6VXNlcjI3OTM4OTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/27938964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wulaoshi",
"html_url": "https://github.com/wulaoshi",
"followers_url": "https://api.github.com/users/wulaoshi/followers",
"following_url": "https://api.github.com/users/wulaoshi/following{/other_user}",
"gists_url": "https://api.github.com/users/wulaoshi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wulaoshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wulaoshi/subscriptions",
"organizations_url": "https://api.github.com/users/wulaoshi/orgs",
"repos_url": "https://api.github.com/users/wulaoshi/repos",
"events_url": "https://api.github.com/users/wulaoshi/events{/privacy}",
"received_events_url": "https://api.github.com/users/wulaoshi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The following is part of the reference code after my deletion:\r\n\r\n```\r\ndef get_binary_data(args, macro, tokenizer, read_data=False):\r\n if macro.local_rank not in [-1, 0]:\r\n torch.distributed.barrier()\r\n\r\n train_file = [\"a.txt\", \"b.txt\", \"c.txt\"]\r\n valid_file = [\"a_valid.txt\", \"b_valid.txt\", \"c_valid.txt\"]\r\n train_data, vaild_data = [],[]\r\n cached_features_file = args.train_path+\"utt_generator\"\r\n if read_data == False:\r\n for t_f in train_file:\r\n train_utterance = file_reader(args.train_path+t_f)\r\n datasets = get_data_loaders(train_utterance, tokenizer)\r\n train_data.extend(datasets)\r\n for t_f in valid_file:\r\n vaild_utterance = file_reader(args.train_path+t_f)\r\n datasets = get_data_loaders(vaild_utterance, tokenizer)\r\n vaild_data.extend(datasets)\r\n else:\r\n read_data = torch.load(cached_features_file)\r\n train_data = read_data[\"train_data\"]\r\n vaild_data = read_data[\"dev_data\"]\r\n\r\n train_len = len(train_data)\r\n logger.info(\"train len:%d, valid len:%d.\"%(len(train_data), len(vaild_data)))\r\n\r\n train_batch_size = args.batch_size * max(1, macro.n_gpu)\r\n train_sampler = RandomSampler(train_data) if macro.local_rank == -1 else DistributedSampler(train_data)\r\n train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=train_batch_size, collate_fn=collate_fn)\r\n\r\n eval_batch_size = args.batch_size * max(1, macro.n_gpu)\r\n eval_sampler = SequentialSampler(vaild_data)\r\n eval_dataloader = DataLoader(vaild_data, sampler=eval_sampler, batch_size=eval_batch_size, collate_fn=collate_fn_test)\r\n\r\n if macro.local_rank == 0:\r\n torch.distributed.barrier()\r\n return train_dataloader, eval_dataloader, train_len\r\n\r\ndef train(model, training_data, optimizer, device, scheduler, args, macro):\r\n model.train()\r\n batch_idx = 0\r\n epoch_loss = 0\r\n logging_loss = 0\r\n global global_step\r\n global tb_writer\r\n for batch in tqdm(\r\n training_data,\r\n mininterval=2,\r\n desc=\" - (Traning) \",\r\n leave=False,\r\n disable=macro.local_rank not in [-1, 0]\r\n ):\r\n batch_idx += 1\r\n input_ids, lm_labels, token_type_ids, attention_mask = list(map(lambda x: x.to(device), batch))\r\n (lm_loss), *_ = model(input_ids, token_type_ids=token_type_ids, lm_labels=lm_labels, attention_mask=attention_mask)\r\n\r\n if macro.n_gpu > 1:\r\n lm_loss = lm_loss.mean() # mean() to average on multi-gpu parallel (not distributed) training\r\n loss = lm_loss / args.gradient_accumulation_steps\r\n loss.backward()\r\n epoch_loss += loss.item()\r\n if (batch_idx) % args.gradient_accumulation_steps == 0:\r\n torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n scheduler.step()\r\n return epoch_loss/batch_idx\r\n\r\n@func_time\r\ndef train_epoch(model, train_data, dev_data, optimizer, device, scheduler, best_result, args, tokenizer, epoch, macro):\r\n if macro.local_rank in [-1, 0]:\r\n logger.info(\"----\" * 5)\r\n logger.info('Epoch: {}'.format(epoch))\r\n train_loss = train(model, train_data, optimizer, device, scheduler, args, macro)\r\n if epoch>=12:\r\n if macro.local_rank in [-1, 0]:\r\n model_to_evaluate = model.module if hasattr(model, \"module\") else model\r\n vaild_bleu = evaluate(model_to_evaluate, dev_data, device, tokenizer, args)\r\n if best_result < vaild_bleu:\r\n best_result = vaild_bleu\r\n torch.save(model_to_evaluate.state_dict(), args.output_model_path + \"_valid\")\r\n logger.info('save:{}'.format(args.output_model_path + \"_valid\"))\r\n 
logger.info(\"Val. Bleu: %4f\" % (vaild_bleu))\r\n if epoch%5==0:\r\n if macro.local_rank == -1 or torch.distributed.get_rank() == 0:\r\n model_to_save = model.module if hasattr(model, \"module\") else model\r\n torch.save(model_to_save.state_dict(), args.output_model_path+\"_\"+str(epoch))\r\n logger.info('save:{}'.format(args.output_model_path+\"_\"+str(epoch)))\r\n if macro.local_rank in [-1, 0]:\r\n logger.info(\"Train Loss:%.5f\"%(train_loss))\r\n logger.info(\"----\"*5)\r\n return best_result\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser()\r\n parser.add_argument('--no_cuda', action=\"store_true\")\r\n parser.add_argument(\"--local_rank\", type=int, default=-1, help=\"local_rank for distributed training on gpus\")\r\n macro = parser.parse_args()\r\n args = OptionSet()\r\n\r\n args.model_name = \"test\"\r\n args.model_name = args.model_name+\"_distributed_nogrid\"\r\n\r\n global logger\r\n logger = create_logger(args)\r\n\r\n # Setup CUDA, GPU & distributed training\r\n if macro.local_rank == -1 or macro.no_cuda:\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() and not macro.no_cuda else \"cpu\")\r\n macro.n_gpu = 0 if macro.no_cuda else torch.cuda.device_count()\r\n else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs\r\n torch.cuda.set_device(macro.local_rank)\r\n device = torch.device(\"cuda\", macro.local_rank)\r\n torch.distributed.init_process_group(backend=\"nccl\")\r\n macro.n_gpu = 1\r\n # args.device = device\r\n set_seed(macro, args.seed)\r\n # Setup logging\r\n logging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n level=logging.INFO if macro.local_rank in [-1, 0] else logging.WARN,\r\n )\r\n logger.warning(\r\n \"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\r\n macro.local_rank,\r\n device,\r\n macro.n_gpu,\r\n bool(macro.local_rank != -1),\r\n # args.fp16,\r\n )\r\n # Load pretrained model and tokenizer\r\n if macro.local_rank not in [-1, 0]:\r\n # Make sure only the first process in distributed training will download model & vocab\r\n torch.distributed.barrier()\r\n\r\n logger.info('using device:{}'.format(device))\r\n model, _, tokenizer = creat_model()\r\n if macro.local_rank == 0:\r\n # Make sure only the first process in distributed training will download model & vocab\r\n torch.distributed.barrier()\r\n model = model.to(device)\r\n global PAD_idx\r\n PAD_idx = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS[-3])\r\n train_data, dev_data, train_len = get_binary_data(args, macro, tokenizer, read_data=False)\r\n # Training phase.\r\n logger.info(\"Start training.\")\r\n instances_num = train_len\r\n train_steps = int(instances_num * args.epochs_num / args.batch_size) + 1\r\n\r\n logger.info('Batch size: {}'.format(args.batch_size))\r\n logger.info('The number of training instances:{}'.format(instances_num))\r\n\r\n num_parameters = 0\r\n parameters = model.parameters()\r\n for parameter in parameters:\r\n num_parameters += parameter.numel()\r\n logger.info('number of model parameters: {}'.format(num_parameters))\r\n\r\n decoder_layer = list(map(id, model.decoder.layers.parameters()))\r\n encoder_para = filter(lambda p: id(p) not in (decoder_layer), model.parameters())\r\n optimizer_grouped_parameters = [\r\n {'params': encoder_para, 'lr': args.learning_rate, 'weight_decay_rate': 0.01},\r\n {'params': model.decoder.layers.parameters(), 'lr': args.learning_rate * 5, 'weight_decay_rate': 
0.01}\r\n ]\r\n optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, correct_bias=False)\r\n scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=train_steps*args.warmup, num_training_steps=train_steps)\r\n\r\n # multi-gpu training (should be after apex fp16 initialization)\r\n if macro.n_gpu > 1:\r\n model = torch.nn.DataParallel(model)\r\n\r\n # Distributed training (should be after apex fp16 initialization)\r\n if macro.local_rank != -1:\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n model, device_ids=[macro.local_rank], output_device=macro.local_rank, find_unused_parameters=True\r\n )\r\n\r\n best_result = 0.0\r\n for epoch in range(1, args.epochs_num+1):\r\n best_result = train_epoch(model, train_data, dev_data, optimizer, device, scheduler, best_result, args, tokenizer, epoch, macro)\r\n\r\n```",
"This is the result of my experiment:\r\n\r\n",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,612 | 1,614 | 1,614 | NONE | null | Reference the code: "transformers/examples/legacy/question-answering/run_squad.py"
I found:
1. normal results without using distributed code, using only gradient accumulation.
2. using the distributed code with the parameter gradient_accumulation_steps=1, the effect is normal.
3. using the distributed code with the parameter gradient_accumulation_steps set to any other value, the results are abnormal.
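For context, the accumulation logic in question follows the usual recipe (a simplified sketch with illustrative names, not the exact script):
```
import torch

def train_with_accumulation(model, train_dataloader, optimizer, scheduler, gradient_accumulation_steps):
    model.train()
    for step, batch in enumerate(train_dataloader):
        loss = model(**batch).loss
        # scale the loss so the accumulated gradients match one large batch
        (loss / gradient_accumulation_steps).backward()
        if (step + 1) % gradient_accumulation_steps == 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
```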
What is going on, please? Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9913/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9912/comments | https://api.github.com/repos/huggingface/transformers/issues/9912/events | https://github.com/huggingface/transformers/issues/9912 | 797,646,851 | MDU6SXNzdWU3OTc2NDY4NTE= | 9,912 | How to add more fields in TrainingArguments | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could subclass the `TrainingArguments` class and add more fields to it. You could refer to `https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py` for an example https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py",
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,612 | 1,614 | 1,614 | NONE | null | I am using ```from transformers import TrainingArguments```. However, there are more training arguments in my own project. How can I add more fields (parameters) into the ```args```? Besides, if I have some other ```Arguments Class``` that is similar to ```TrainingArguments```, how can I merge them into one ```args```? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9912/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9911/comments | https://api.github.com/repos/huggingface/transformers/issues/9911/events | https://github.com/huggingface/transformers/pull/9911 | 797,613,181 | MDExOlB1bGxSZXF1ZXN0NTY0NTc4NzU1 | 9,911 | [seq2seq] fix logger format for non-main process | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@LysandreJik knows better for the centralized logging system so I'll defer to him."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | Currently, in `finetune_trainer.py` non-main process doesn't have any formatting at all, so we end up with:
```
[WARNING|modeling_t5.py:1645] 2021-01-30 20:01:37,246 >> [p0] got MPU
[WARNING|modeling_t5.py:1646] 2021-01-30 20:01:37,246 >> [p0] DP group [0]
[p1] got MPU
[p1] DP group [1]
```
As you can see, the 2nd process in DDP is missing formatting in its logger output.
This PR fixes that.
I also looked at the take-over version, `run_seq2seq.py`, to see whether it needed the same fix; it doesn't have these function calls at all, and I'm not sure why. They appear to be needed, unless they get called elsewhere.
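For reference, the kind of per-process logging setup this touches looks roughly like the sketch below (illustrative only, not the exact diff; the format string is a placeholder):
```
import logging
import sys

def setup_logging(local_rank: int) -> None:
    # give every process a formatted handler, instead of only the main one
    logging.basicConfig(
        format="[%(levelname)s|%(filename)s:%(lineno)s] %(asctime)s >> %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        handlers=[logging.StreamHandler(sys.stdout)],
    )
    # main process logs at INFO, other ranks only surface warnings and errors
    logging.getLogger().setLevel(logging.INFO if local_rank in (-1, 0) else logging.WARNING)
```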
@sgugger, @patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9911",
"html_url": "https://github.com/huggingface/transformers/pull/9911",
"diff_url": "https://github.com/huggingface/transformers/pull/9911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9911.patch",
"merged_at": 1612166893000
} |
https://api.github.com/repos/huggingface/transformers/issues/9910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9910/comments | https://api.github.com/repos/huggingface/transformers/issues/9910/events | https://github.com/huggingface/transformers/pull/9910 | 797,562,700 | MDExOlB1bGxSZXF1ZXN0NTY0NTM3OTA1 | 9,910 | Doc title in the template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | COLLABORATOR | null | # What does this PR do?
After reviewing a few PRs post-template, I've noticed the doc pages are always misnamed: they should use the cased name of the model, not the uppercase version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9910/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9910",
"html_url": "https://github.com/huggingface/transformers/pull/9910",
"diff_url": "https://github.com/huggingface/transformers/pull/9910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9910.patch",
"merged_at": 1612166732000
} |
https://api.github.com/repos/huggingface/transformers/issues/9909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9909/comments | https://api.github.com/repos/huggingface/transformers/issues/9909/events | https://github.com/huggingface/transformers/issues/9909 | 797,553,548 | MDU6SXNzdWU3OTc1NTM1NDg= | 9,909 | run_seq2seq.py : Why we pad labels with -100? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please use the [forums](https://discuss.huggingface.co/) for questions like this. We keep issues for bugs or feature requests only."
] | 1,612 | 1,612 | 1,612 | CONTRIBUTOR | null | As mentioned in [this line](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py#L437), why do we add -100? Can't we just keep pad_token_id? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9909/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9908/comments | https://api.github.com/repos/huggingface/transformers/issues/9908/events | https://github.com/huggingface/transformers/issues/9908 | 797,540,297 | MDU6SXNzdWU3OTc1NDAyOTc= | 9,908 | [seq2seq] some logging for all processes in distributed mode | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'm definitely not an expert on logging so I'll leave it to @LysandreJik and @sgugger here. The idea of adding new `multi-process` logging functionality sounds very reasonable to me though!",
"I would advocate for using the `logger.warning` in cases where you want to display something from all processes. While in the library we should be strict about what we want to display as info/warn/error, I think we can be a bit more flexible and use the logger verbosity differently, as it's actively defined in the scripts. Of course, this is only if you're defining the logs in the script and not in the library, otherwise we'll need to reconsider.\r\n\r\nIf we don't want to go down this road, we could also have an approach defined on the log levels. We could potentially define additional log levels, between INFO and WARN that could get the job done.\r\n\r\n",
"Oh, I like an intermediate level between INFO and WARN, like `IMPORTANT_INFO`? (`ALL_PROCESS_INFO` wouldn't make sense as it would not technically control the number of processes).",
"I'm talking about the library here, Trainer that is. But, of course, the same should apply to custom scripts.\r\n\r\nI think a custom logger is a better idea. In particular since it needs to log the rank of the process - currently I have to add it manually.\r\n\r\nI don't think we should mess with levels. These should be sacrosanct. This is because it could make things very confusing for the user.\r\n\r\nBut being able to retrieve a 2nd logger object that logs for all processes with the format that includes a process rank and using the same log level would be useful. \r\n\r\nThough need to decide whether:\r\na. Such logger would be not logging anything unless invoked in a distributed environment.\r\nb. Or perhaps it's actually better for it to be identical to the normal logger under non-distributed env, so logs aren't missed - it's just it'll not include process rank.",
"> I'm talking about the library here, Trainer that is.\r\n\r\nThe problem is that only Trainer knows when it's executed in a distributed training but logs are in all parts of the library. Though maybe this new logger will only be used inside the `Trainer`?\r\n\r\n(Sorry my 3yo made a wrong click.) ",
"> The problem is that only Trainer knows when it's executed in a distributed training but logs are in all parts of the library. \r\n\r\nNot really. We have `torch.distributed.get_rank()` for most things to know whether we are under distributed, though the logger shouldn't be initialized until first use since the dist env comes a bit later in the game. Down the road if we have other methods that defined multiproc we will just try those too or provide one for a user to run if it's non-standard and it'll set a multi-proc flag in the logging library.\r\n\r\nIt probably could/should copy the same format from normal logger, but embed process rank into it.\r\n\r\ni.e. perhaps it can be created on the fly and require no special handling on the user side (or trainer side).\r\n\r\n> Though maybe this new logger will only be used inside the Trainer?\r\n\r\nNo, user scripts will need it too. It's not trainer-specific. Think `model.parallelize()` `model.enable_pipeline` (new)\r\n\r\nAnd just to clarify - this is for logging inside the model's code - perhaps I will find a way to abstract it out, but it still would be outside of Trainer and in the core library.\r\n\r\nAn example of this is building a custom device_map for PP or MP specific to the process and logging that. This would be the same whether it was called from Trainer or user's code.\r\n\r\n> (Sorry my 3yo made a wrong click.)\r\n\r\nYou have competition growing ;)",
"I have no issues with having a second logger used in multi-process environments. It would be nice if it could be handled by the `logging` class of `transformers`, so as to have a single front-end for the logging, otherwise we'll end up confusing the users as much as if we add intermediate logging levels.\r\n\r\nDo we actually need a second logger though, wouldn't it be simpler to adapt the formatter for those particular logs?",
"> I have no issues with having a second logger used in multi-process environments. It would be nice if it could be handled by the `logging` class of `transformers`, so as to have a single front-end for the logging, [...]\r\n\r\nYes, that is what I had in mind.\r\n\r\n> Do we actually need a second logger though, wouldn't it be simpler to adapt the formatter for those particular logs?\r\n\r\nCould you please give a example of what you have in mind? \r\n\r\nI have no attachment whatsoever to how this is done. So if you already have an idea on how to make this work I'm all ears.\r\n\r\nThank you.",
"Ok I gave it more thought and my proposal of using formatters was a mistake, it won't be possible this way. I looked for a solution using filters, but, alas, the logs are already filtered by the levels before they're handled by the filters.\r\n\r\nThinking about it further, I'm not 100% sure how I can see several loggers here as we already have one logger per module. If you know how to do it cleanly, by all means, please do!\r\n\r\n---\r\n\r\nAs an aside, I don't think having intermediate levels is a bad thing. The `logging` utility has an `addLevelName` method, and this specific use-case seems perfect. I understand why adding many different level names will get harder to understand, but this is the addition of one level for a situation that would benefit from it.\r\n\r\nIt would only require adding a level, which has a name. Here's how it could look like:\r\n\r\n```py\r\nimport logging\r\nimport sys\r\n\r\n# Add level > 30\r\nlogging.addLevelName(35, \"MODEL_PARALLEL_INFO\")\r\n\r\n# Setup logging like we do in our scripts\r\nlogger = logging.getLogger(__name__)\r\nlogging.basicConfig(\r\n format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\r\n datefmt=\"%m/%d/%Y %H:%M:%S\",\r\n handlers=[logging.StreamHandler(sys.stdout)],\r\n)\r\n\r\nfor i in range(3):\r\n # Simulate our scripts' logging\r\n logger.setLevel(logging.INFO if i == 0 else logging.WARN)\r\n\r\n logger.warning(f\"Initiating {i}\")\r\n logger.info(\"Random information\")\r\n logger.log(logging.getLevelName(\"MODEL_PARALLEL_INFO\"), f\"Device Map of {i}: []\")\r\n```\r\n\r\nThis logs the following:\r\n\r\n```\r\n02/03/2021 15:40:28 - WARNING - __main__ - Initiating 0\r\n02/03/2021 15:40:28 - INFO - __main__ - Random information\r\n02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 0: []\r\n02/03/2021 15:40:28 - WARNING - __main__ - Initiating 1\r\n02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 1: []\r\n02/03/2021 15:40:28 - WARNING - __main__ - Initiating 2\r\n02/03/2021 15:40:28 - MODEL_PARALLEL_INFO - __main__ - Device Map of 2: []\r\n```\r\n\r\nHappy to drop the idea if you still think this would make things confusing for the user. I do agree that it could make things confusing for users not making use of the scripts and wondering why an INFO statement slipped when they set their verbosity to `warn`.",
"I think we can have the intermediate level be intermediate -> so not shown when the verbosity is set to `warn`. I expect the scripts to switch the statement from\r\n```\r\nlogger.setLevel(logging.INFO if i == 0 else logging.WARN)\r\nlogger.parallel_info(f\"Initiating {i}\")\r\n```\r\nto\r\n```\r\nlogger.setLevel(logging.INFO if i == 0 else logging.MODEL_PARALLEL_INFO)\r\nlogger.parallel_info(f\"Initiating {i}\")\r\n```\r\n\r\nI disagree with the name though, as this is a bit too specific ;-) PARALLEL_INFO is enough IMO\r\n",
"To the naming:\r\n\r\nWell, PP is too specific - not very generic either.\r\n\r\nWhat we have here is 2 specific events, which may happen under any distributed training. So the common is that it's distributed, the separate is:\r\n1. log only once for multiple processes (avoid duplicated logging)\r\n2. log for every process (only for unique per-process logging)\r\n\r\nand single process training is a special case of distributed with n_procs=1, so for a single proc both should be logged.\r\n\r\nSo I think this is what the name should reflect and not the specific circumstance it's used in.\r\n\r\n------------------\r\n\r\nTo the implementation, thank you for your specific code suggestions @LysandreJik and @sgugger - please let me experiment with your proposals and try other things out and I will come back to you.\r\n\r\n",
"Apologies for the delay, here is how I see a simple solution that doesn't break any conventions.\r\n\r\nWe create a second logger. Just need to think how to make it appear if the user didn't explicitly configure one and make it globally available from other modules.\r\n\r\nHere is a possible implementation:\r\n```\r\n# logger.py\r\n\r\nimport logging\r\nimport sys\r\nimport os\r\n\r\nlocal_rank = int(os.environ.get(\"LOCAL_RANK\", -1))\r\n\r\n# normal logger\r\nlogger = logging.getLogger(__name__)\r\nhandler_shared = logging.StreamHandler(sys.stdout)\r\nformatter_shared = logging.Formatter('%(asctime)s - %(levelname)s - %(name)s - %(message)s')\r\nhandler_shared.setFormatter(formatter_shared)\r\nlogger.addHandler(handler_shared)\r\n\r\n# rank-specific logger\r\nif local_rank != -1:\r\n logger_rank_specific = logging.getLogger(__name__ + \"rank_specific\")\r\n handler_rank_specific = logging.StreamHandler(sys.stdout)\r\n formatter_rank_specific = logging.Formatter(f'%(asctime)s - %(levelname)s - p{local_rank} - %(name)s - %(message)s')\r\n handler_rank_specific.setFormatter(formatter_rank_specific)\r\n logger_rank_specific.addHandler(handler_rank_specific)\r\nelse:\r\n logger_rank_specific = logger\r\n\r\n# the 2nd logger is just for special info that each process should print\r\nlogger_rank_specific.setLevel(logging.INFO)\r\n# set normal logger to just the main process INFO\r\nlogger.setLevel(logging.INFO if local_rank < 1 else logging.WARN)\r\n\r\n# test\r\nlogger.warning(f\"Initiating\")\r\nlogger.info(\"Random information\")\r\n\r\nlogger_rank_specific.info(f\"Device Map: {[1] * local_rank}\")\r\n\r\n```\r\n\r\nDist test:\r\n```\r\n$ python -m torch.distributed.launch --nproc_per_node 4 ./logger.py\r\n2021-02-11 19:57:45,889 - WARNING - __main__ - Initiating\r\n2021-02-11 19:57:45,889 - INFO - __main__ - Random information\r\n2021-02-11 19:57:45,889 - INFO - p0 - __main__rank_specific - Device Map: []\r\n2021-02-11 19:57:45,897 - WARNING - __main__ - Initiating\r\n2021-02-11 19:57:45,898 - INFO - p1 - __main__rank_specific - Device Map: [1]\r\n2021-02-11 19:57:45,905 - WARNING - __main__ - Initiating\r\n2021-02-11 19:57:45,905 - INFO - p2 - __main__rank_specific - Device Map: [1, 1]\r\n2021-02-11 19:57:45,914 - WARNING - __main__ - Initiating\r\n2021-02-11 19:57:45,914 - INFO - p3 - __main__rank_specific - Device Map: [1, 1, 1]\r\n```\r\n\r\nNon-dist test:\r\n```\r\n$ python ./logger.py\r\n2021-02-11 20:21:16,716 - WARNING - __main__ - Initiating\r\n2021-02-11 20:21:16,717 - INFO - __main__ - Random information\r\n2021-02-11 20:21:16,717 - INFO - __main__ - Device Map: []\r\n```\r\n\r\nAll works.\r\n\r\nNot sure what to call the second logger, open to suggestions.\r\n\r\nWhat do you think?\r\n\r\nThank you.",
"I am fine with using a second logger like this, I guess it could be called `multiprocess_logger` and that its name could be `+ \"rank_specific\"` like you said. Should we add a method `get_multiprocess_logger` in the logging module so that people don't have to remember the `__name__ + \"rank_specific\"` part? This would give an API like:\r\n```\r\nfrom .util import logging\r\n\r\nlogger = logging.get_logger(__name__)\r\nmultiprocess_logger = logging.get_multiprocess_logger(__name__)\r\n```\r\nin the modules where we need the `multiprocess_logger`. And the `set_verbosity_xxx` methods would affect both transformers logger.\r\n\r\nFor the scripts we would still need to do it manually though.",
"Yes, of course, we will have it all nicely wrapped up. If @LysandreJik is in agreement, I will work on a PR.\r\n\r\nIt'd be nice to have a somewhat shorter name for `multiprocess_logger`, but the one you proposed works too. Perhaps, in reversed to aid the completion? `logger_multiproc` or `logger_multiprocess` or `logger_mp`?\r\n\r\nAlso I'm not sure with `__name__ + \"rank_specific\"` - should it be in sync with the variable name - whichever we choose?\r\n\r\nHmm, what if instead of changing the format for that logger to be\r\n```\r\nlogging.Formatter(f'%(asctime)s - %(levelname)s - p{local_rank} - %(name)s - %(message)s')\r\n```\r\nWe keep the exact same format, but we simply append the actual rank to the name?\r\n```\r\nlogger_rank_specific = logging.getLogger(__name__ + f\"local_rank_{local_rank}\")\r\nlogging.Formatter(f'%(asctime)s - %(levelname)s - %(name)s - %(message)s')\r\n```\r\nBut again, either way works. Just one less thing to modify in this case.\r\n\r\n",
"Thanks for writing everything out, I'm ok with your proposal!"
] | 1,612 | 1,618 | null | CONTRIBUTOR | null | In 2D Parallelism, e.g. Pipeline + DeepSpeed I need to log unique device maps per process for the user to see, but currently `logger.info()` is only activated for the main process via `if is_main_process`. Currently only in `examples/seq2seq/run_seq2seq.py`, `examples/seq2seq/finetune_trainer.py`, but it'll be needed for other scripts as well down the road.
Any idea how I could accomplish that while keeping things as they are? I guess I could use `logger.warn` as a workaround, since it's not disabled for other processes. But it's not a good approach, since it's a WARNING after all. And I don't quite want to use `print()` as it might not be what the user wants if they want things quiet.
Perhaps you have some other ideas on how I could go about doing that.
I think we could perhaps add another logger that is INFO-activated for all distributed processes and is used only occasionally, when the normal logger won't do.
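Something along these lines is what I have in mind (a rough sketch only; the names and the format string are placeholders, not a final proposal):
```
import logging
import os

local_rank = int(os.environ.get("LOCAL_RANK", -1))

# normal logger: INFO only on the main process
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO if local_rank in (-1, 0) else logging.WARNING)

# rank-specific logger: INFO on every process, with the rank baked into the format
logger_all_ranks = logging.getLogger(f"{__name__}.rank{local_rank}")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(f"%(asctime)s - %(levelname)s - p{local_rank} - %(message)s"))
logger_all_ranks.addHandler(handler)
logger_all_ranks.setLevel(logging.INFO)

logger_all_ranks.info("per-process info, e.g. this rank's device map")
```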
I think as we are getting more and more into distributed training we will need to be able to log specific things for specific processes.
Thank you.
@LysandreJik, @patrickvonplaten, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9908/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9907/comments | https://api.github.com/repos/huggingface/transformers/issues/9907/events | https://github.com/huggingface/transformers/pull/9907 | 797,517,412 | MDExOlB1bGxSZXF1ZXN0NTY0NTAwODk1 | 9,907 | Remove subclass for sortish sampler | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,612 | 1,612 | 1,612 | COLLABORATOR | null | # What does this PR do?
When putting the sortish sampler in the main `Trainer`, I forgot to remove the override in `Seq2SeqTrainer`, which led to an issue (see #9900). This in turn makes the old `finetune_trainer` script fail because its datasets don't have the right entries (the texts are processed during data collation), so it requires reverting the changes in that script so that it goes back to using the old `Seq2SeqTrainer` (which is fine since that script will soon move to legacy).
Fixes #9900 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9907",
"html_url": "https://github.com/huggingface/transformers/pull/9907",
"diff_url": "https://github.com/huggingface/transformers/pull/9907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9907.patch",
"merged_at": 1612184793000
} |
https://api.github.com/repos/huggingface/transformers/issues/9906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9906/comments | https://api.github.com/repos/huggingface/transformers/issues/9906/events | https://github.com/huggingface/transformers/issues/9906 | 797,443,618 | MDU6SXNzdWU3OTc0NDM2MTg= | 9,906 | Error "Expected input batch_size (16) to match target batch_size (1440)" in the WNUT NER example | {
"login": "dr-manhattan",
"id": 3458093,
"node_id": "MDQ6VXNlcjM0NTgwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3458093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dr-manhattan",
"html_url": "https://github.com/dr-manhattan",
"followers_url": "https://api.github.com/users/dr-manhattan/followers",
"following_url": "https://api.github.com/users/dr-manhattan/following{/other_user}",
"gists_url": "https://api.github.com/users/dr-manhattan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dr-manhattan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dr-manhattan/subscriptions",
"organizations_url": "https://api.github.com/users/dr-manhattan/orgs",
"repos_url": "https://api.github.com/users/dr-manhattan/repos",
"events_url": "https://api.github.com/users/dr-manhattan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dr-manhattan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm guessing you're running the trainer code block from the sequence classification example verbatim. You want `DistilBertForTokenClassification` not `DistilBertForSequenceClassification`, so comment out:\r\n\r\n> model = DistilBertForSequenceClassification.from_pretrained(\"distilbert-base-uncased\") ",
"> I'm guessing you're running the trainer code block from the sequence classification example verbatim. You want `DistilBertForTokenClassification` not `DistilBertForSequenceClassification`, so comment out:\r\n> \r\n> > model = DistilBertForSequenceClassification.from_pretrained(\"distilbert-base-uncased\")\r\n\r\nDoh. Indeed, thanks for the catch.",
"Had to use DistilBertForTokenClassification to reproduce the example. "
] | 1,612 | 1,617 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@joeddav
@sgugger
## Information
Model I am using: DistilBert
The problem arises when using:
* [x] the official example script
Steps to reproduce the behavior:
reproducing the NER example from
https://huggingface.co/transformers/master/custom_datasets.html
verbatim (in colab) I get
```
"Expected input batch_size (16) to match target batch_size (1440)."
```
when running trainer.train
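(For anyone hitting the same thing: as the replies in this thread point out, the mismatch comes from pairing token-level labels with a sequence-classification head; a minimal token-classification setup is sketched here, where `unique_tags` stands for the tag set built earlier in the WNUT example.)
```
from transformers import DistilBertForTokenClassification

# token classification predicts one label per token, matching the WNUT NER labels
model = DistilBertForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(unique_tags)  # unique_tags: placeholder from the example
)
```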
Stack trace:
```
ValueError Traceback (most recent call last)
<ipython-input-12-aa1378d94d0f> in <module>()
21 )
22
---> 23 trainer.train()
8 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial)
886 tr_loss += self.training_step(model, inputs)
887 else:
--> 888 tr_loss += self.training_step(model, inputs)
889 self._total_flos += self.floating_point_ops(inputs)
890
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1248 loss = self.compute_loss(model, inputs)
1249 else:
-> 1250 loss = self.compute_loss(model, inputs)
1251
1252 if self.args.n_gpu > 1:
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs)
1275 Subclass and override for custom behavior.
1276 """
-> 1277 outputs = model(**inputs)
1278 # Save past state if it exists
1279 # TODO: this needs to be fixed and made cleaner later.
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_distilbert.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
637 else:
638 loss_fct = nn.CrossEntropyLoss()
--> 639 loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
640
641 if not return_dict:
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
960 def forward(self, input: Tensor, target: Tensor) -> Tensor:
961 return F.cross_entropy(input, target, weight=self.weight,
--> 962 ignore_index=self.ignore_index, reduction=self.reduction)
963
964
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2466 if size_average is not None or reduce is not None:
2467 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2469
2470
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2260 if input.size(0) != target.size(0):
2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
-> 2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
ValueError: Expected input batch_size (16) to match target batch_size (1440).
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9906/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9905/comments | https://api.github.com/repos/huggingface/transformers/issues/9905/events | https://github.com/huggingface/transformers/issues/9905 | 797,432,640 | MDU6SXNzdWU3OTc0MzI2NDA= | 9,905 | exe executable file | {
"login": "tang-ed",
"id": 61105590,
"node_id": "MDQ6VXNlcjYxMTA1NTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/61105590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tang-ed",
"html_url": "https://github.com/tang-ed",
"followers_url": "https://api.github.com/users/tang-ed/followers",
"following_url": "https://api.github.com/users/tang-ed/following{/other_user}",
"gists_url": "https://api.github.com/users/tang-ed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tang-ed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tang-ed/subscriptions",
"organizations_url": "https://api.github.com/users/tang-ed/orgs",
"repos_url": "https://api.github.com/users/tang-ed/repos",
"events_url": "https://api.github.com/users/tang-ed/events{/privacy}",
"received_events_url": "https://api.github.com/users/tang-ed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread."
] | 1,612 | 1,614 | 1,614 | NONE | null | I want to use pyinstaller to convert my test.py into an exe executable file, but unfortunately, I failed. When I checked the reason, it seems the transformers library was not bundled successfully. Could it be that my method is wrong? The command I used is pyinstaller -D test.py; however, transformers does not appear in the generated output.
pyinstaller -D test.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9905/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9904/comments | https://api.github.com/repos/huggingface/transformers/issues/9904/events | https://github.com/huggingface/transformers/issues/9904 | 797,408,932 | MDU6SXNzdWU3OTc0MDg5MzI= | 9,904 | Tokenizer return offsets | {
"login": "borsork377",
"id": 70897626,
"node_id": "MDQ6VXNlcjcwODk3NjI2",
"avatar_url": "https://avatars.githubusercontent.com/u/70897626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borsork377",
"html_url": "https://github.com/borsork377",
"followers_url": "https://api.github.com/users/borsork377/followers",
"following_url": "https://api.github.com/users/borsork377/following{/other_user}",
"gists_url": "https://api.github.com/users/borsork377/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borsork377/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borsork377/subscriptions",
"organizations_url": "https://api.github.com/users/borsork377/orgs",
"repos_url": "https://api.github.com/users/borsork377/repos",
"events_url": "https://api.github.com/users/borsork377/events{/privacy}",
"received_events_url": "https://api.github.com/users/borsork377/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All fast tokenizers have this feature, just pass along `return_offsets_mapping=True` in your call to the tokenizer. Also note that fast tokenizers are used by default when `AutoTokenizer` is called.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
Request for the feature raised in [Issue #1263](https://github.com/huggingface/transformers/issues/1263).
Previous PRs have attempted to address this but none of them were merged - https://github.com/huggingface/transformers/pull/1274 and https://github.com/huggingface/transformers/pull/2178.
## Motivation
Refer to [Issue #1263](https://github.com/huggingface/transformers/issues/1263).
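(Note for later readers: as the first reply in this thread points out, fast tokenizers already expose this via `return_offsets_mapping=True`; a quick sketch, with the printed offsets shown as an illustrative comment:)
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # a fast tokenizer by default
enc = tokenizer("Hello world!", return_offsets_mapping=True)
print(enc["offset_mapping"])  # e.g. [(0, 0), (0, 5), (6, 11), (11, 12), (0, 0)]
```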
## Your contribution
Please guide me on how to submit a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9903/comments | https://api.github.com/repos/huggingface/transformers/issues/9903/events | https://github.com/huggingface/transformers/pull/9903 | 797,392,530 | MDExOlB1bGxSZXF1ZXN0NTY0NDAyOTk3 | 9,903 | Clarify definition of seed argument in TrainingArguments | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks for flagging the doc was incorrect! You changes are not entirely correct either so I made some suggestions.\r\n\r\nThanks for fixing my tweaks - I like the changes so committed them 😃 ",
"Sorry for missing that last suggestion of yours - should be ready to go now!",
"Yes, thanks a lot!"
] | 1,612 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Clarifies the definition of the `seed` argument in `TrainingArguments` to:
* Explain what "initialisation" refers to
* How to ensure reproducibility across runs
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Link to discussion on the HF forum: https://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442?u=lewtun
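For illustration, a hedged sketch of the pattern the clarified docstring is getting at (not part of the doc change itself; the commented-out checkpoint name is an assumption):

```python
from transformers import TrainingArguments, set_seed

args = TrainingArguments(output_dir="out", seed=42)
set_seed(args.seed)
# Instantiate the model *after* seeding so that weight initialization is reproducible, e.g.:
# model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```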
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9903",
"html_url": "https://github.com/huggingface/transformers/pull/9903",
"diff_url": "https://github.com/huggingface/transformers/pull/9903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9903.patch",
"merged_at": 1612109371000
} |
https://api.github.com/repos/huggingface/transformers/issues/9902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9902/comments | https://api.github.com/repos/huggingface/transformers/issues/9902/events | https://github.com/huggingface/transformers/issues/9902 | 797,385,698 | MDU6SXNzdWU3OTczODU2OTg= | 9,902 | PPLM example - AttributeError issue | {
"login": "OlegDurandin",
"id": 7582325,
"node_id": "MDQ6VXNlcjc1ODIzMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7582325?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OlegDurandin",
"html_url": "https://github.com/OlegDurandin",
"followers_url": "https://api.github.com/users/OlegDurandin/followers",
"following_url": "https://api.github.com/users/OlegDurandin/following{/other_user}",
"gists_url": "https://api.github.com/users/OlegDurandin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OlegDurandin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OlegDurandin/subscriptions",
"organizations_url": "https://api.github.com/users/OlegDurandin/orgs",
"repos_url": "https://api.github.com/users/OlegDurandin/repos",
"events_url": "https://api.github.com/users/OlegDurandin/events{/privacy}",
"received_events_url": "https://api.github.com/users/OlegDurandin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Could you install an earlier version of `transformers` to see if it works? I believe it was tested with `transformers==3.0.1`",
"It works fine up to and including version v4.2.2 but is broken in versions above that",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,612 | 1,619 | 1,619 | NONE | null | Hi all,
Thank you for great library.
Now I try to understand PPLM model (see reference https://eng.uber.com/pplm/ ), and when I try to start example from HuggingFace repository https://github.com/huggingface/transformers/tree/master/examples/research_projects/pplm (run_pplm.py) - I faced with the next issue:
```
Traceback (most recent call last):
File "run_pplm.py", line 820, in <module>
run_pplm_example(**vars(args))
File "run_pplm.py", line 678, in run_pplm_example
repetition_penalty=repetition_penalty,
File "run_pplm.py", line 405, in full_text_generation
repetition_penalty=repetition_penalty,
File "run_pplm.py", line 511, in generate_text_pplm
device=device,
File "run_pplm.py", line 115, in perturb_past
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
File "run_pplm.py", line 115, in <listcomp>
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
AttributeError: 'tuple' object has no attribute 'shape'
```
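A possible adaptation for newer versions (a hedged sketch, not a verified fix — per the comments the script runs fine up to v4.2.2; the `gpt2` checkpoint and prompt below are only illustrative):

```python
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
out = model(**tokenizer("The potato", return_tensors="pt"), use_cache=True)

# Recent releases return past_key_values as a tuple of (key, value) pairs per layer;
# stacking each pair restores the single-tensor-per-layer layout run_pplm.py assumes.
past = tuple(torch.stack(layer_past) for layer_past in out.past_key_values)
grad_accumulator = [np.zeros(p.shape).astype("float32") for p in past]
```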
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform:
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1
I ran this example both on my own laptop and in a Google Colab environment.
## To reproduce
Steps to reproduce the behavior:
1. Follow to https://github.com/huggingface/transformers/tree/master/examples/research_projects/pplm and do Setup steps
2. Run the command: `python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
`
## Expected behavior
This script should work without an error :)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9902/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9901/comments | https://api.github.com/repos/huggingface/transformers/issues/9901/events | https://github.com/huggingface/transformers/issues/9901 | 797,370,567 | MDU6SXNzdWU3OTczNzA1Njc= | 9,901 | Missing model license information | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @julien-c ",
"> I wanted to check if your conditions set a default license under which models are uploaded when not specified?\r\n\r\nNo, that's really the model author's call. But we will try to make it easier/more straightforward for a user to pick one in the future.\r\n\r\n> In order to have a missing license added when it is missing, could you please advise on the standard way to proceed?\r\n\r\nA GH issue is fine I think, otherwise a thread on [discuss.huggingface.co](https://discuss.huggingface.co) would work well too.",
"Thank you for the clarification!"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | Hello,
A significant number of models uploaded to the model hub do not contain any license information. I wanted to check whether your terms and conditions set a default license under which models are uploaded when none is specified.
In order to have a license added when it is missing, could you please advise on the standard way to proceed?
- Should an issue be created and tagging the author of the model asking for the license?
- Should I contact the author directly without raising an issue?
The community now has contributed a large number of very useful models, but more transparency regarding licensing (or default license) would be great.
Below are a few models I would be very interested in getting license information for, but a more general approach would be very beneficial:
- https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1 (@patil-suraj )
- https://huggingface.co/mrm8488/longformer-base-4096-finetuned-squadv2 (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-uncased-finetuned-squadv2 (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-finetuned-ner (@mrm8488 )
- https://huggingface.co/mrm8488/mobilebert-finetuned-pos (@mrm8488 )
Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9900/comments | https://api.github.com/repos/huggingface/transformers/issues/9900/events | https://github.com/huggingface/transformers/issues/9900 | 797,351,020 | MDU6SXNzdWU3OTczNTEwMjA= | 9,900 | run_seq2seq.py doesn't work after enabling sortish sampler | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | It gives an error saying **AttributeError: 'Dataset' object has no attribute 'make_sortish_sampler'**. Seems like something is wrong with the pipeline or versions. I installed both transformers and datasets from source.
[Exact line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L46)
```
self.train_dataset.make_sortish_sampler(
AttributeError: 'Dataset' object has no attribute 'make_sortish_sampler'
```
## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-5.8.18-050818-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9899/comments | https://api.github.com/repos/huggingface/transformers/issues/9899/events | https://github.com/huggingface/transformers/issues/9899 | 797,330,729 | MDU6SXNzdWU3OTczMzA3Mjk= | 9,899 | Does Sortish Sampler work with multiple GPUs in seq2seq? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Currently `SortishSampler` will only work with the `finetune_trainer.py` scripts. It will be supported in `run_seq2seq.py` in the soon. And to answer your question, yes it works with multiple GPUs, and you won't need to enable distributer parameter if the training is launched on multi GPUs using `torch.distributed.launch`, it'll be enabled automatically. ",
"thanks a lot."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | I am referring to the training script of run_seq2seq.py. I am exactly referring to [this line in seq2seq_trainer.py](https://github.com/huggingface/transformers/blob/1420b5ff675ccdc3296c6776b339a08a22d2e941/src/transformers/trainer_seq2seq.py#L48). So when should I enable **distributed** parameters and how should I do it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9899/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9898/comments | https://api.github.com/repos/huggingface/transformers/issues/9898/events | https://github.com/huggingface/transformers/pull/9898 | 797,327,228 | MDExOlB1bGxSZXF1ZXN0NTY0MzU0MDg1 | 9,898 | [doc] nested markup is invalid in rst | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | Apparently nested markup in RST is invalid: https://docutils.sourceforge.io/FAQ.html#is-nested-inline-markup-possible
So currently this line doesn't get rendered properly, leaving inner markdown unrendered, resulting in:
```
You can create a model repo directly from `the /new page on the website <https://huggingface.co/new>`__.
```
This PR removes the bold markup, which fixes the link.
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9898/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9898",
"html_url": "https://github.com/huggingface/transformers/pull/9898",
"diff_url": "https://github.com/huggingface/transformers/pull/9898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9898.patch",
"merged_at": 1612018760000
} |
https://api.github.com/repos/huggingface/transformers/issues/9897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9897/comments | https://api.github.com/repos/huggingface/transformers/issues/9897/events | https://github.com/huggingface/transformers/pull/9897 | 797,313,095 | MDExOlB1bGxSZXF1ZXN0NTY0MzQzMTU3 | 9,897 | [t5 tokenizer] add info logs | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't think this is something that should be done in `save_vocabulary`. You have the option in `save_pretrained` to set `legacy_format` to `False` to generate that `tokenizer.json` file. I'm not an expert in the tokenization side with all the stuff that was added for backward compatibility so I don't know if there is a better option.\r\n\r\nI wasn't aware havin this file was mandatory for some models to use the fast tokenizer. Are you sure you have sentencepiece installed? It might be due to this that the conversion slow to fast does not work automatically\r\n\r\nAnyhow, once we have found the right way to generate that `tokenizer.json` file, it should be added on the model sharing doc page, next to the section on how to generate TF/PyTorch checkpoints, so that people know what to do to have the most complete model on the hub.",
"I don't have a problem to add it anywhere else, who do we tag on this?\r\n\r\n1. Let the code speak for itself:\r\n```\r\npython -c \"from transformers import T5Tokenizer, T5TokenizerFast; mname_from='sshleifer/t5-tinier-random'; tokenizer = T5Tokenizer.from_pretrained(mname_from); tokenizer_fast = T5TokenizerFast.from_pretrained(mname_from)\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_base.py\", line 1762, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_base.py\", line 1835, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/mnt/disc1/data/trash/src/transformers/src/transformers/models/t5/tokenization_t5_fast.py\", line 139, in __init__\r\n super().__init__(\r\n File \"/mnt/disc1/data/trash/src/transformers/src/transformers/tokenization_utils_fast.py\", line 86, in __init__\r\n fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)\r\nException: No such file or directory (os error 2)\r\n```\r\n\r\n2. If `footokenizer.from_pretrained()` fetches `tokenizer.json` then `footokenizer.save_pretrained()` must save it too.\r\n \r\n> I wasn't aware havin gthis file was mandatory for some models to use the fast tokenizer. Are you sure you have sentencepiece installed? It might be due to this that the conversion slow to fast does not work automatically\r\n\r\n```\r\npip install sentencepiece\r\nRequirement already satisfied: sentencepiece in /mnt/nvme1/anaconda3/envs/main-38/lib/python3.8/site-packages (0.1.91)\r\n```\r\n\r\nIf you look at the trace it is hunting for that file and can't find it.\r\n\r\n> Anyhow, once we have found the right way to generate that tokenizer.json file, it should be added on the model sharing doc page, next to the section on how to generate TF/PyTorch checkpoints, so that people know what to do to have the most complete model on the hub.\r\n\r\nAgreed!\r\n\r\n@LysandreJik, @n1t0 ",
"ok, so as @sgugger suggested on slack, the fast tokenizer saving will be handled on the core-level some time in the future, so I removed that part from this PR, leaving just the logger part."
] | 1,611 | 1,613 | 1,613 | CONTRIBUTOR | null | This PR (was modified from the original):
- adds info logs that correspond to the saved tokenizer files on `tokenizer.save_pretrained()`
--------------------------
original PR note
This PR
- adds code to save t5 fast tokenizer `tokenizer.json` file on `tokenizer.save_pretrained()`
- adds info logs that correspond to the saved tokenizer files on `tokenizer.save_pretrained()`
Context:
- I needed to create a new smallish t5 model, and the created model won't work w/o `tokenizer.json`.
- Also, as I was debugging why I was missing that file, I enabled logging and saw that we were getting logs for every saved file except the tokenizer files, so this PR fixes that, making the logging consistent and helping one see if something is missing.
Here is an example:
```
TRANSFORMERS_VERBOSITY=info PYTHONPATH=/hf/transformers-master/src python t5-make-very-small-model.py
[....]
Configuration saved in t5-very-small-random/config.json
Model weights saved in t5-very-small-random/pytorch_model.bin
Configuration saved in t5-very-small-random/config.json
tokenizer config file saved in t5-very-small-random/tokenizer_config.json
Special tokens file saved in t5-very-small-random/special_tokens_map.json
Copy vocab file to t5-very-small-random/spiece.model
tokenizer config file saved in t5-very-small-random/tokenizer_config.json
Special tokens file saved in t5-very-small-random/special_tokens_map.json
Copy vocab file to t5-very-small-random/spiece.model
Copy tokenizer file to t5-very-small-random/tokenizer.json
```
I'm not sure why I needed to save both:
```
tokenizer.save_pretrained(mname_very_small)
tokenizer_fast.save_pretrained(mname_very_small)
```
note `tokenization_t5.py` doesn't have it! both t5 tokenizer files:
```
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model"}
VOCAB_FILES_NAMES = {"vocab_file": "spiece.model", "tokenizer_file": "tokenizer.json"}
```
As I flagged on Slack, `https://huggingface.co/sshleifer/t5-tinier-random` fails to load since it's missing the fast `tokenizer.json` file from the S3 set of files,
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 373, in <module>
main()
File "./finetune_trainer.py", line 205, in main
tokenizer = AutoTokenizer.from_pretrained(
File "/home/stas/hf/transformers/src/transformers/models/auto/tokenization_auto.py", line 385, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
return cls._from_pretrained(
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_base.py", line 1841, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/stas/hf/transformers/src/transformers/models/t5/tokenization_t5_fast.py", line 139, in __init__
super().__init__(
File "/home/stas/hf/transformers/src/transformers/tokenization_utils_fast.py", line 86, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
```
it could be a symptom of another problem in our code.
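For reference, a minimal sketch of the workaround suggested in the review comments — saving the fast tokenizer with `legacy_format=False` so that `tokenizer.json` is written (the `t5-small` checkpoint below is an illustrative stand-in for the model being built):

```python
from transformers import T5TokenizerFast

# Saving the fast tokenizer in the non-legacy format writes the unified tokenizer.json.
tok_fast = T5TokenizerFast.from_pretrained("t5-small")
tok_fast.save_pretrained("t5-very-small-random", legacy_format=False)
```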
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9897/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9897",
"html_url": "https://github.com/huggingface/transformers/pull/9897",
"diff_url": "https://github.com/huggingface/transformers/pull/9897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9897.patch",
"merged_at": 1613225422000
} |
https://api.github.com/repos/huggingface/transformers/issues/9896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9896/comments | https://api.github.com/repos/huggingface/transformers/issues/9896/events | https://github.com/huggingface/transformers/pull/9896 | 797,310,874 | MDExOlB1bGxSZXF1ZXN0NTY0MzQxMzYx | 9,896 | [wandb] restore WANDB_DISABLED=true to disable wandb | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If I understood correctly the user's issue, the problem was that any value was accepted. We can add \"True\" in `ENV_VAR_TRUE_VALUES` which seems to be missing, but if I set `WAND_DISABLED=False` for instance, I would expect wandb to not be disabled.\r\n\r\nIn any case those env variables are now deprecated (have to make a PR to issue a proper warning) since we have the `report_to` training argument that allows the user to set the reporting platform they want to use.",
"> If I understood correctly the user's issue, the problem was that any value was accepted. We can add \"True\" in `ENV_VAR_TRUE_VALUES` which seems to be missing, but if I set `WAND_DISABLED=False` for instance, I would expect wandb to not be disabled.\r\n\r\nThat is how I implemented it originally for this PR, but then read user's issue that triggered the PR that broke the original setting, and the issue writer requested a plain - any `WANDB_DISABLED` value. I'm fine with either. Do you want me to recode it to add `True`?\r\n\r\nAlso it needs to be documented, so that this disabling is solid and doesn't get changed again and again. If it's documented with just `Yes` that is already supported that is good enough for me.\r\n\r\n> In any case those env variables are now deprecated (have to make a PR to issue a proper warning) since we have the `report_to` training argument that allows the user to set the reporting platform they want to use.\r\n\r\nWell, except this new feature doesn't help in this particular case. As you can see from https://github.com/huggingface/transformers/issues/9623 and problems as recent as yesterday wandb is still a problem, even if you don't purposefully activate it or even have it installed. I won't be trying to fix this if it worked. \r\n\r\nPerhaps the default `report_to` should be `None` and have an option for `All` to ease up for those who want them all?\r\n\r\nWhatever the outcome, please let's fix so that if one doesn't have wandb installed it shouldn't break things.\r\n\r\nThank you.\r\n",
"> Do you want me to recode it to add True?\r\n\r\nYes, just as I said, adding `True` to the `ENV_VAR_TRUE_VALUES` should be enough to have this work (it's an oversight that `True` is not in that constant).\r\n\r\n> Also it needs to be documented, so that this disabling is solid and doesn't get changed again and again. If it's documented with just Yes that is already supported that is good enough for me.\r\n\r\nBy all means, please add documentation in this PR. For now it's documented with the [callback](https://huggingface.co/transformers/main_classes/callback.html#transformers.integrations.WandbCallback) but I'm open to any suggestion to make this better.\r\n\r\n> Whatever the outcome, please let's fix so that if one doesn't have wandb installed it shouldn't break things\r\n\r\nThe bug in #9623 with wandb not installed is linked to something weird in your env as I haven't been able to reproduce it by following your steps. I can add stronger checks that wandb is a proper module by checking its version/authors (like is done for [datasets](https://github.com/huggingface/transformers/blob/22121e813e2d043feb4484865ab5871870cb9dc3/src/transformers/file_utils.py#L130) but I have no idea if it will solve your bug or not (since I have no reproducer on my side).\r\n\r\nIf wandb is installed and you pass along `--report_to []`, you should not see either\r\n```\r\nwandb.errors.error.Error: You must call wandb.init() before wandb.log()\r\n```\r\nnor\r\n```\r\nAttributeError: module 'wandb' has no attribute 'ensure_configured'\r\n``` \r\nas the callback is not passed to the Trainer.\r\n\r\n> Perhaps the default report_to should be None and have an option for All to ease up for those who want them all?\r\n\r\nAs I explained before, that switch will be done in v5, as it is a breaking change.",
"Thank you for the feedback, @sgugger - PR updated as requested, plus synced trainer_tf with the same solution."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR
* extends `ENV_VARS_TRUE_VALUES` with "true"
* restores `WANDB_DISABLED=true` to disable wandb
* documents this exact setting
* syncs trainer_tf with the same solution.
Context: we are still dealing with https://github.com/huggingface/transformers/issues/9623 where wandb fails no matter if you have it installed or not.
It looks like, due to https://github.com/huggingface/transformers/issues/9699, this behavior was changed a few days ago to require one of `ENV_VARS_TRUE_VALUES = {"1", "ON", "YES"}`, and this is not documented anywhere.
This PR tries to restore the original behavior where any value of `WANDB_DISABLED` should disable wandb.
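For reference, a minimal usage sketch of the behavior this PR restores (illustrative only — the environment variable must be set before the `Trainer` is created):

```python
import os

# With this change, any truthy value (e.g. "true") disables the W&B integration.
os.environ["WANDB_DISABLED"] = "true"
# ... then build TrainingArguments / Trainer as usual; the WandbCallback is skipped.
```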
And the wandb integration is broken, which is why we need a way to disable it - it's so annoying when trying to develop and wandb keeps on breaking things whether it's installed or not. See: https://github.com/huggingface/transformers/issues/9623
Alternatively, instead of the proposed change in this PR, let's document that this API has to be one of `{"1", "ON", "YES"}`, so that it doesn't change from day to day.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9896",
"html_url": "https://github.com/huggingface/transformers/pull/9896",
"diff_url": "https://github.com/huggingface/transformers/pull/9896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9896.patch",
"merged_at": 1612167247000
} |
https://api.github.com/repos/huggingface/transformers/issues/9895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9895/comments | https://api.github.com/repos/huggingface/transformers/issues/9895/events | https://github.com/huggingface/transformers/issues/9895 | 797,285,093 | MDU6SXNzdWU3OTcyODUwOTM= | 9,895 | TFGPT2LMHeadModel unknown location | {
"login": "MacVej",
"id": 12750987,
"node_id": "MDQ6VXNlcjEyNzUwOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/12750987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MacVej",
"html_url": "https://github.com/MacVej",
"followers_url": "https://api.github.com/users/MacVej/followers",
"following_url": "https://api.github.com/users/MacVej/following{/other_user}",
"gists_url": "https://api.github.com/users/MacVej/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MacVej/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MacVej/subscriptions",
"organizations_url": "https://api.github.com/users/MacVej/orgs",
"repos_url": "https://api.github.com/users/MacVej/repos",
"events_url": "https://api.github.com/users/MacVej/events{/privacy}",
"received_events_url": "https://api.github.com/users/MacVej/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved it by installing tensorflow-gpu=2.3.0 & cuda 10.1\r\n\r\nFollowing this guide:\r\nhttps://medium.com/analytics-vidhya/tensorflow-2-3-0-with-gpu-support-on-windows-10-f975a552ea7c\r\n\r\nUse this command to install gpu2.3.0\r\npython -m pip install https://storage.googleapis.com/tensorflow/windows/gpu/tensorflow_gpu-2.3.0-cp37-cp37m-win_amd64.whl"
] | 1,611 | 1,612 | 1,612 | NONE | null | I have been playing around with tensorflow (CPU), and some language model'ing - and it have been a blast so far - everything working great.
But after watching my old CPU slowly getting killed by all the model training - I decided it was time to finally get some use out of my RTX 2080. I have been following the guide from [Washington University](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/install/tensorflow-install-jul-2020.ipynb). Pretty quickly I got tensorflow-gpu running and ran it on some light grade-prediction tasks and the like.
But when I got to running the GPT2 language model, I ran into some minor problems. I start by tokenizing the data:
from tokenizers.models import BPE
from tokenizers import Tokenizer
from tokenizers.decoders import ByteLevel as ByteLevelDecoder
from tokenizers.normalizers import NFKC, Sequence
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer
class BPE_token(object):
def __init__(self):
self.tokenizer = Tokenizer(BPE())
self.tokenizer.normalizer = Sequence([
NFKC()
])
self.tokenizer.pre_tokenizer = ByteLevel()
self.tokenizer.decoder = ByteLevelDecoder()
def bpe_train(self, paths):
trainer = BpeTrainer(vocab_size=50000, show_progress=True, inital_alphabet=ByteLevel.alphabet(), special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>"
])
self.tokenizer.train(trainer, paths)
def save_tokenizer(self, location, prefix=None):
if not os.path.exists(location):
os.makedirs(location)
self.tokenizer.model.save(location, prefix)
# ////////// TOKENIZE DATA ////////////
from pathlib import Path
import os  # the folder 'text' contains all the files
paths = [str(x) for x in Path("./da_corpus/").glob("**/*.txt")]
tokenizer = BPE_token()# train the tokenizer model
tokenizer.bpe_train(paths)# saving the tokenized data in our specified folder
save_path = 'tokenized_data'
tokenizer.save_tokenizer(save_path)
The code above works perfectly and tokenizes the data - just like with tensorflow (CPU). After having my data tokenized, I start to train my model - but before it even gets started, I get the following ImportError:
from transformers import GPT2Config, TFGPT2LMHeadModel, GPT2Tokenizer # loading tokenizer from the saved model path
ImportError: cannot import name 'TFGPT2LMHeadModel' from 'transformers' (unknown location)
The Transformers package seems to be installed correctly in the site-packages lib, and I seem to be able to use the other transformers classes - but not **TFGPT2LMHeadModel**.
I have read everything on Google and [huggingface.co](https://huggingface.co/transformers/) - tried different versions of tensorflow-gpu, transformers, tokenizers and a lot of other packages - sadly nothing helps.
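One quick check worth running (a hedged suggestion — the TF* classes are only exported when transformers detects a working TensorFlow install, so this narrows the problem down):

```python
import transformers

print(transformers.__version__)
print(transformers.is_tf_available())  # must be True for the TF* classes to be importable

from transformers import TFGPT2LMHeadModel  # should resolve once TensorFlow is visible
```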
**Packages:**
- Python, 3.7.1
- Tensorflow 2.1.0
- Tensorflow-gpu 2.1.0
- Tensorflow-base 2.1.0
- Tensorflow-estimator 2.1.0
- Transformers 4.2.2
- Tokenizers 0.9.4
- cudnn 7.6.5
- cudatoolkit 10.1.243
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9895/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9894/comments | https://api.github.com/repos/huggingface/transformers/issues/9894/events | https://github.com/huggingface/transformers/issues/9894 | 797,275,429 | MDU6SXNzdWU3OTcyNzU0Mjk= | 9,894 | ImportError: cannot import name 'PreTrainedEncoderDecoder' from 'transformers' (unknown location) | {
"login": "gianfilippo",
"id": 10429140,
"node_id": "MDQ6VXNlcjEwNDI5MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/10429140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gianfilippo",
"html_url": "https://github.com/gianfilippo",
"followers_url": "https://api.github.com/users/gianfilippo/followers",
"following_url": "https://api.github.com/users/gianfilippo/following{/other_user}",
"gists_url": "https://api.github.com/users/gianfilippo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gianfilippo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gianfilippo/subscriptions",
"organizations_url": "https://api.github.com/users/gianfilippo/orgs",
"repos_url": "https://api.github.com/users/gianfilippo/repos",
"events_url": "https://api.github.com/users/gianfilippo/events{/privacy}",
"received_events_url": "https://api.github.com/users/gianfilippo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not sure where you have seen those objects: they are nowhere in the transformers library. The library provides `EncoderDecoderModel`, see the [encoder/decoder doc page](https://huggingface.co/transformers/model_doc/encoderdecoder.html).",
"Hi, I was reading this (https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8). I also found someone reporting some issue while using the same object here\r\nhttps://github.com/huggingface/transformers/issues/2206.\r\nPerhaps I am looking at some older version ?\r\n\r\n",
"This is indeed from an older version (I guess 2 something or even 1 something). ",
"Thanks. I will look at the EncoderDecoderModel"
] | 1,611 | 1,612 | 1,612 | NONE | null | Hi,
I am using the library to pretrain my model of choice. I am now interested in setting up an encoder-decoder architecture with my pretrained models, and the "combiners" seem like quite a straightforward way to do that.
Unfortunately, I am getting an import error on both "PreTrainedEncoderDecoder" and "Model2Model".
What am I missing ?
Thanks
Gianfilippo
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.0.dev0
- Platform: Linux-3.10.0-1062.33.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
## To reproduce
Steps to reproduce the behavior:
1.python -c "from transformers import PreTrainedEncoderDecoder"
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
ImportError: cannot import name 'PreTrainedEncoderDecoder' from 'transformers' (unknown location)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
no error
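For completeness, a minimal sketch using `EncoderDecoderModel`, the class the current docs provide for this purpose (the `bert-base-uncased` checkpoints are only illustrative):

```python
from transformers import EncoderDecoderModel

# Builds an encoder-decoder model from two pretrained checkpoints.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
```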
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9893/comments | https://api.github.com/repos/huggingface/transformers/issues/9893/events | https://github.com/huggingface/transformers/issues/9893 | 797,203,374 | MDU6SXNzdWU3OTcyMDMzNzQ= | 9,893 | rfc: new benchmark tool | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2604155188,
"node_id": "MDU6TGFiZWwyNjA0MTU1MTg4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Benchmarks",
"name": "Benchmarks",
"color": "2DF372",
"default": false,
"description": "Issues related to Memory regressions in tests and scripts"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"I was thinking about one feature if possible,\r\n\r\nHow about when we run an example script a benchmarking script is automatically run and store the results in one file if the user passes an optional argument.\r\n\r\nWhen the user uploads the model on the model hub we can directly sort the model based on benchmarking results file.",
"All data files on the model hub for the same model arch will give the same speed performance results, since they are just data points. \r\n\r\nTherefore it's the model code that needs to be benchmarked (and the trainer if there is more than one).\r\n\r\nAnd given that currently we have only one model implementation of each there is nothing to compare it to.\r\n\r\nThe main idea of this issue is to do regression testing, to ensure that we don't accidentally make models slower while changing the code. For an example of this happening, please see: https://github.com/huggingface/transformers/pull/11218"
] | 1,611 | 1,623 | null | CONTRIBUTOR | null | This issue is to collect notes and ideas on creating a new benchmarking tool.
This is not about the other speed/memory regression project we have been discussing elsewhere.
This is about integration and various comparisons that we need to run in order to give users the best advice on how to deploy transformers in the most efficient way.
Please share the comments ideas/suggestions/concerns/needs, and I will compile them here.
- important: not part of examples - the goal is performance and integration tooling and not user-facing - totally different needs and priorities
- the cmd line has to continue working the same months later - so that old benchmarks could be re-run - ok to change interface with back-compat option so that the old benchmarks can be still re-validated and compared to
- ideally work with any transformers model - a single tool to rule them all
- minimal amount of arguments - just the important ones
- ability to generate markdown table entries directly and json files that contain not just the outcome but also the key variables that are being tested -
- the report to include critical hardware/software params as well in a compact form and allow these to be merged from multiple recordings - i.e. if the hw/sw are the same - they can be merged into a single report. will need to figure out how to record hardware nuances
* e.g. the same DDP test with 2 gpus connected w/ NVLink gives dramatically different results than the same 2 gpus w/o NVLink.
* not sure how to record CPU-capacity/ free RAM, etc., since all these impact the outcome
- crucial to be able to truncate the dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9893/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/9893/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9892/comments | https://api.github.com/repos/huggingface/transformers/issues/9892/events | https://github.com/huggingface/transformers/issues/9892 | 797,198,206 | MDU6SXNzdWU3OTcxOTgyMDY= | 9,892 | Seeking clarification on T5 prefix for summarization | {
"login": "ari9dam",
"id": 14134882,
"node_id": "MDQ6VXNlcjE0MTM0ODgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14134882?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ari9dam",
"html_url": "https://github.com/ari9dam",
"followers_url": "https://api.github.com/users/ari9dam/followers",
"following_url": "https://api.github.com/users/ari9dam/following{/other_user}",
"gists_url": "https://api.github.com/users/ari9dam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ari9dam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ari9dam/subscriptions",
"organizations_url": "https://api.github.com/users/ari9dam/orgs",
"repos_url": "https://api.github.com/users/ari9dam/repos",
"events_url": "https://api.github.com/users/ari9dam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ari9dam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ari9dam \r\nPlease use the [forum](https://discuss.huggingface.co/) for such questions, and there's a discussion about this in the post \r\nhttps://discuss.huggingface.co/t/t5-finetuning-tips/684"
] | 1,611 | 1,613 | 1,613 | NONE | null | In the paper, I see the the prefix for summarization is "TL;DR:" . If I look into the model [config.json](https://huggingface.co/t5-base/blob/main/config.json) of T5-Base, I see it is "summarization:".
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
.....
If I want to finetune Huggingface T5 for summarization, which prefix should I use?
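For context, a minimal sketch of how the prefix from `task_specific_params` is typically prepended when building fine-tuning inputs (the `article_text` placeholder is an illustrative assumption):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
article_text = "Your source document goes here."  # placeholder input
inputs = tokenizer("summarize: " + article_text, return_tensors="pt")
```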
Thank you
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9892/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9891/comments | https://api.github.com/repos/huggingface/transformers/issues/9891/events | https://github.com/huggingface/transformers/issues/9891 | 797,115,433 | MDU6SXNzdWU3OTcxMTU0MzM= | 9,891 | Remove Token from Vocab? | {
"login": "BigSalmon2",
"id": 61605789,
"node_id": "MDQ6VXNlcjYxNjA1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigSalmon2",
"html_url": "https://github.com/BigSalmon2",
"followers_url": "https://api.github.com/users/BigSalmon2/followers",
"following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}",
"gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions",
"organizations_url": "https://api.github.com/users/BigSalmon2/orgs",
"repos_url": "https://api.github.com/users/BigSalmon2/repos",
"events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigSalmon2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | Is there a way I can remove a token from vocab.json? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9890/comments | https://api.github.com/repos/huggingface/transformers/issues/9890/events | https://github.com/huggingface/transformers/pull/9890 | 797,021,708 | MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTM0 | 9,890 | Restore TF embeddings and attention layers to their previous version | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Pinging @mfuntowicz ",
"> Morgan was mentioning that the transpose_for_score method was called right after the Q/K/V projection, but that there was no need to split this dimension if we're not doing head masking.\r\nWhat do you think? Maybe that's some work for another PR, though.\r\n\r\nI think it seems doable but not sure, I prefer to keep things like this to be sure we revert properly as it was before and we get at least a proper version, and we can take care of this change in another PR.",
"Sounds good to me!",
"LGTM! @patrickvonplaten feel free to merge if you approve the changes ^^",
"I won't have time to do a proper review today (can do it tomorrow), but feel free to merge without me if @LysandreJik and @sgugger are ok with it",
"@patrickvonplaten if you can take a look at it today and merge it if it's fine with you, that would be great"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR restores the attention layers and the embeddings to how they were in v4.1, even though the embeddings received a few improvements over their original version in order to keep XLA compliance. The reason is that we realized some of the operators used were not compatible with some NN SDKs such as the one from Qualcomm or ONNX.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9890",
"html_url": "https://github.com/huggingface/transformers/pull/9890",
"diff_url": "https://github.com/huggingface/transformers/pull/9890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9890.patch",
"merged_at": 1612784191000
} |
https://api.github.com/repos/huggingface/transformers/issues/9889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9889/comments | https://api.github.com/repos/huggingface/transformers/issues/9889/events | https://github.com/huggingface/transformers/pull/9889 | 796,923,988 | MDExOlB1bGxSZXF1ZXN0NTY0MDE5NTUw | 9,889 | m2m_100 | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [] | 1,611 | 1,613 | 1,613 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9889/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/9889/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9889",
"html_url": "https://github.com/huggingface/transformers/pull/9889",
"diff_url": "https://github.com/huggingface/transformers/pull/9889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9889.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9888/comments | https://api.github.com/repos/huggingface/transformers/issues/9888/events | https://github.com/huggingface/transformers/issues/9888 | 796,844,143 | MDU6SXNzdWU3OTY4NDQxNDM= | 9,888 | [Quick poll] Give your opinion on the future of 🤗 transformers: 40k edition! | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
}
] | closed | false | null | [] | [
"Just did, thanks a lot @LysandreJik, the form is super quick to fill-in and interesting!\r\nEveryone, we're waiting for you",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | MEMBER | null | Thanks to all of you, Transformers just passed 40k :star2: this week!
Our libraries have always been about the community and we need your input to define the direction of the next 40k stars.
If you have a couple of minutes and want to participate in shaping the future of the library, please share your thoughts: https://forms.gle/FackvXzWJBWQz2WY8
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9888/reactions",
"total_count": 20,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 4,
"rocket": 5,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/9888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9887/comments | https://api.github.com/repos/huggingface/transformers/issues/9887/events | https://github.com/huggingface/transformers/pull/9887 | 796,783,152 | MDExOlB1bGxSZXF1ZXN0NTYzOTA0MjE5 | 9,887 | Fit chinese wwm to new datasets | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger @LysandreJik \r\nCould you help me review these code ?",
"> Hi there! Thanks for updating your example. We have now created a `research_projects` project for the examples not directly maintained by the core team, and I think the `run_mlm_wwm` script and the chine_ref file could all go there in a new folder. Would you mind adjusting your PR in that direction?\r\n\r\nSure, maybe move `run_chinese_ref.py` to `research_projects` folder and leave `run_mlm_wwm.py` in where it was would be better ? And I don't know which folder is better ?\r\nThe two files are independent, we could move it to anywhere.",
"The `run_mlm_wwm` file is not maintained by us directly and it only works for BERT-models, compared to the other examples, so I think it can all go together there. You can create a new folder named `mlm_wwm` (since it's not just Chinese) for instance and have the specific requirements in the `requirements.txt` file there?",
"> The `run_mlm_wwm` file is not maintained by us directly and it only works for BERT-models, compared to the other examples, so I think it can all go together there. You can create a new folder named `mlm_wwm` (since it's not just Chinese) for instance and have the specific requirements in the `requirements.txt` file there?\r\n\r\ndone!",
"Last thing is to run `make style` to make sure the files are properly formatted, let me know if you have any issue doing this!",
"> Last thing is to run `make style` to make sure the files are properly formatted, let me know if you have any issue doing this!\r\n\r\nyeah, seem my previous PR also failed in format :(\r\nI got error as follow:\r\n```\r\n#!/bin/bash -eo pipefail\r\nblack --check examples tests src utils\r\nwould reformat /home/circleci/transformers/examples/research_projects/mlm_wwm/run_chinese_ref.py\r\nwould reformat /home/circleci/transformers/src/transformers/trainer.py\r\nOh no! 💥 💔 💥\r\n2 files would be reformatted, 706 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\n```\r\nBut I formate my code.\r\n\r\n\r\nMaybe you could help me do this part ?",
"@sgugger My pleasure. Maybe you could help me fix the formate error :(\r\nMy python version `3.9.1` black `20.8b1`, why I got diff result in CI."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | Sorry for the late update.
I made my code (especially the Chinese mlm_wwm part) fit the newest code.
Here are the changes:
1. add a `chinese_ref` key to avoid missing the ref info.
2. fix the type bug in `data_collator.py`
3. re-add `run_chinese_ref.py` because it runs with the newest version of the code (4.2.2).
4. update the readme | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9887/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9887",
"html_url": "https://github.com/huggingface/transformers/pull/9887",
"diff_url": "https://github.com/huggingface/transformers/pull/9887.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9887.patch",
"merged_at": 1612168680000
} |
https://api.github.com/repos/huggingface/transformers/issues/9886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9886/comments | https://api.github.com/repos/huggingface/transformers/issues/9886/events | https://github.com/huggingface/transformers/issues/9886 | 796,774,815 | MDU6SXNzdWU3OTY3NzQ4MTU= | 9,886 | Conversion of BPE tokenizer for Marian models | {
"login": "SaricVr",
"id": 19590330,
"node_id": "MDQ6VXNlcjE5NTkwMzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19590330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaricVr",
"html_url": "https://github.com/SaricVr",
"followers_url": "https://api.github.com/users/SaricVr/followers",
"following_url": "https://api.github.com/users/SaricVr/following{/other_user}",
"gists_url": "https://api.github.com/users/SaricVr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaricVr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaricVr/subscriptions",
"organizations_url": "https://api.github.com/users/SaricVr/orgs",
"repos_url": "https://api.github.com/users/SaricVr/repos",
"events_url": "https://api.github.com/users/SaricVr/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaricVr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1920687293,
"node_id": "MDU6TGFiZWwxOTIwNjg3Mjkz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Fast%20Tokenizers",
"name": "Fast Tokenizers",
"color": "b60205",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"cc'ing @n1t0 on this in case he didn't see it",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | Hello,
I was searching for the pt-en model of Marian and noticed that it has not been converted for the huggingface library apparently because it uses a BPE tokenizer. Is it possible to convert BPE-based models to be used in huggingface somehow?
Thank you | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9886/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9885/comments | https://api.github.com/repos/huggingface/transformers/issues/9885/events | https://github.com/huggingface/transformers/issues/9885 | 796,713,099 | MDU6SXNzdWU3OTY3MTMwOTk= | 9,885 | Finetune_Trainer Question | {
"login": "caincdiy",
"id": 43126828,
"node_id": "MDQ6VXNlcjQzMTI2ODI4",
"avatar_url": "https://avatars.githubusercontent.com/u/43126828?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caincdiy",
"html_url": "https://github.com/caincdiy",
"followers_url": "https://api.github.com/users/caincdiy/followers",
"following_url": "https://api.github.com/users/caincdiy/following{/other_user}",
"gists_url": "https://api.github.com/users/caincdiy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caincdiy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caincdiy/subscriptions",
"organizations_url": "https://api.github.com/users/caincdiy/orgs",
"repos_url": "https://api.github.com/users/caincdiy/repos",
"events_url": "https://api.github.com/users/caincdiy/events{/privacy}",
"received_events_url": "https://api.github.com/users/caincdiy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead? You'll get more help there.\r\n\r\nThe docs regarding the maintained trainer are available [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#new-script) and may be useful to you.\r\nThanks!",
"Oh sorry about that, Thank you very much"
] | 1,611 | 1,611 | 1,611 | NONE | null | Hi, I'm new to HuggingFace. I want to fine-tune a BARTForConditionalGeneration model with finetune_trainer.py for the translation task on Google Colab, but I couldn't figure out how to use the script to fine-tune the model. Could anyone show me a quick example? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9884/comments | https://api.github.com/repos/huggingface/transformers/issues/9884/events | https://github.com/huggingface/transformers/issues/9884 | 796,683,286 | MDU6SXNzdWU3OTY2ODMyODY= | 9,884 | Exporting model to onnx increases the model size | {
"login": "hetpandya",
"id": 55797177,
"node_id": "MDQ6VXNlcjU1Nzk3MTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/55797177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hetpandya",
"html_url": "https://github.com/hetpandya",
"followers_url": "https://api.github.com/users/hetpandya/followers",
"following_url": "https://api.github.com/users/hetpandya/following{/other_user}",
"gists_url": "https://api.github.com/users/hetpandya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hetpandya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hetpandya/subscriptions",
"organizations_url": "https://api.github.com/users/hetpandya/orgs",
"repos_url": "https://api.github.com/users/hetpandya/repos",
"events_url": "https://api.github.com/users/hetpandya/events{/privacy}",
"received_events_url": "https://api.github.com/users/hetpandya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I wouldn't be surprised that ONNX serializes each of the layers as independant layers when they're all repeated. I don't know enough about the ONNX export to know it that's the issue or what to do to fix it though.\r\n\r\nDo you get similar increases in size with other models? With BERT for example?",
"Hi @LysandreJik , I faced the issue only on albert models so far, exporting to onnx for other BERT models worked fine and use them for prediction as well.",
" I solved the issue using [this code](https://github.com/thehetpandya/onnx-shared-weights-remove/blob/main/onnx_remove_shared_weights.ipynb) that removes shared weights from the ONNX model."
] | 1,611 | 1,613 | 1,613 | NONE | null | Hi, I'm trying to convert the following models to ONNX:
- ktrapeznikov/albert-xlarge-v2-squad-v2
- albert-xlarge-v1
- albert-xlarge-v2
The common issue when exporting all of these models is that I get an exception saying the protobuf size exceeds 2 GB, even though each of these models is less than 800 MB. When I use the use_external_data_format=True flag, the exported model files (the individual network layers, as I found in other issues) add up to several GB. For example, the model ktrapeznikov/albert-xlarge-v2-squad-v2 is 210 MB, but when I convert it to ONNX using the use_external_data_format flag, the export adds up to 4 GB.
## Code example
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
model_name = "ktrapeznikov/albert-xlarge-v2-squad-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
model.eval()
question = "what is google specialization"
text = "Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware."
encoding = tokenizer.encode_plus(question, text)
input_ids, attention_mask, token_type_ids = encoding["input_ids"],encoding["attention_mask"], encoding["token_type_ids"]
input_ids = torch.tensor([input_ids])
attention_mask = torch.tensor([attention_mask])
token_type_ids = torch.tensor([token_type_ids])
torch.onnx.export(
model,
(input_ids,attention_mask, token_type_ids),
f"{model_name}.onnx",
input_names = ['input_ids','attention_mask', 'token_type_ids'],
output_names = ['qa_outputs'],
opset_version=12, ##opset has to be set to 12
do_constant_folding=True,
use_external_data_format=True,
dynamic_axes = {
'input_ids' : {0: 'batch', 1: 'sequence'},
'attention_mask' : {0: 'batch', 1: 'sequence'},
'token_type_ids' : {0: 'batch', 1: 'sequence'},
'qa_outputs': {0: 'batch'}
}
)
```
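As a side note, here is a minimal sketch of how the exported weights can be inspected to see where the size goes. It assumes the `onnx` package is installed; the file path is only a placeholder for the model exported above.
```python
import onnx
from onnx import numpy_helper

# Load the exported model; external data files are resolved next to the .onnx file.
onnx_model = onnx.load("albert-xlarge-v2-squad-v2.onnx")

total_bytes = 0
for initializer in onnx_model.graph.initializer:
    array = numpy_helper.to_array(initializer)
    total_bytes += array.nbytes
    # Repeated tensors of the same shape hint that ALBERT's shared layer weights
    # were serialized once per layer instead of once overall.
    print(initializer.name, tuple(array.shape), f"{array.nbytes / 1e6:.1f} MB")

print(f"Total initializer size: {total_bytes / 1e9:.2f} GB")
```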


## System Info
PyTorch version: 1.7.0+cu101
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: 10.1.243
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] torch==1.7.0+cu101
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.3.1
[pip3] torchvision==0.8.1+cu101
Any help would be much appreciated.
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9883/comments | https://api.github.com/repos/huggingface/transformers/issues/9883/events | https://github.com/huggingface/transformers/issues/9883 | 796,614,180 | MDU6SXNzdWU3OTY2MTQxODA= | 9,883 | examples/seq2seq , where can I find the definition for the sortish_sampler argument? | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am trying to understand the use of sortish sampler. Right now it is not used in the run_seq2seq.py script. ",
"I found it. It is in the t**rainer_seq2seq.p**y script."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9883/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/9882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9882/comments | https://api.github.com/repos/huggingface/transformers/issues/9882/events | https://github.com/huggingface/transformers/issues/9882 | 796,538,683 | MDU6SXNzdWU3OTY1Mzg2ODM= | 9,882 | Some weights of {} were not initialized from the model checkpoint | {
"login": "yeounyi",
"id": 41869778,
"node_id": "MDQ6VXNlcjQxODY5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/41869778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeounyi",
"html_url": "https://github.com/yeounyi",
"followers_url": "https://api.github.com/users/yeounyi/followers",
"following_url": "https://api.github.com/users/yeounyi/following{/other_user}",
"gists_url": "https://api.github.com/users/yeounyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeounyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeounyi/subscriptions",
"organizations_url": "https://api.github.com/users/yeounyi/orgs",
"repos_url": "https://api.github.com/users/yeounyi/repos",
"events_url": "https://api.github.com/users/yeounyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeounyi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Thanks for opening an issue. I see two issues with your setup here:\r\n- Why are you using `from_pretrained` to load the `RobertaModel` inside your pre-trained model? You should just initialize a `RobertaModel` from the configuration imo.\r\n- Instead of `PreTrainedModel`, I would instead use `RobertaPreTrainedModel`.\r\n\r\nSee the below script for an example of what I would recommend. I'm saving & reloading the model to make sure that all the weights get saved/loaded:\r\n\r\n```py\r\nfrom transformers import RobertaModel, RobertaConfig, logging\r\nfrom transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel\r\nimport torch\r\n\r\nlogging.set_verbosity_info()\r\n\r\nclass MaskClassifier(RobertaPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config=config)\r\n self.roberta = RobertaModel(config)\r\n self.max_mask = 10\r\n self.hidden_size = config.hidden_size\r\n self.linear1 = torch.nn.Linear(2 * self.hidden_size, self.hidden_size)\r\n self.linear2 = torch.nn.Linear(self.hidden_size, self.max_mask + 1)\r\n self.softmax = torch.nn.Softmax(dim=1)\r\n\r\n self.init_weights()\r\n\r\nmodel = MaskClassifier.from_pretrained(\"roberta-base\")\r\n```\r\n\r\nLet's see the logs now, for the first load using the `roberta-base` checkpoint:\r\n\r\n```\r\nSome weights of the model checkpoint at roberta-base were not used when initializing MaskClassifier: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight']\r\n- This IS expected if you are initializing MaskClassifier from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing MaskClassifier from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of MaskClassifier were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.embeddings.position_ids', 'linear1.weight', 'linear1.bias', 'linear2.weight', 'linear2.bias']\r\n```\r\n\r\nThe warning tells you: you're not using the `lm_head` weights, and the following layers are initialized: `linear1` and `linear2`.\r\nSince you're not using the LM head, and the two layers are the ones you just added, then there's nothing to worry about.\r\n\r\nLet's try saving the model and reloading it again:\r\n\r\n```py\r\nmodel.save_pretrained(\"here\")\r\nMaskClassifier.from_pretrained(\"here\")\r\n```\r\n\r\nThe logs show:\r\n```\r\nAll model checkpoint weights were used when initializing MaskClassifier.\r\nAll the weights of MaskClassifier were initialized from the model checkpoint at here.\r\n```\r\n\r\nSuccess :tada: ",
"Thanks a lot!!! It works ",
"@LysandreJik \r\n\r\nI really appreciate your help! You saved me from nightmares... \r\nActually I have one more custom model, and I tried the same structure you showed me, but it fails to load the weights. The only difference is that I'm using RobertaForMaskedLM, not RobertaModel here. \r\n\r\nModel Structure\r\n```\r\nclass MaskedLM(RobertaPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config=config)\r\n self.roberta = RobertaForMaskedLM(config)\r\n # self.tokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\n self.refinement_num = 3\r\n # self.mask_id = self.tokenizer.convert_tokens_to_ids([tokenizer.mask_token])[0] # 50264\r\n self.init_weights()\r\n def forward( ... )\r\n```\r\nInitialize Model\r\n```\r\nmodel = MaskedLM.from_pretrained('roberta-base')\r\n```\r\n\r\nError Message \r\n```\r\nSome weights of the model checkpoint at roberta-base were not used when initializing MaskedLM: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', ... ]\r\n- This IS expected if you are initializing MaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing MaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of MaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.roberta.embeddings.position_ids', 'roberta.roberta.embeddings.word_embeddings.weight', 'roberta.roberta.embeddings.position_embeddings.weight', 'roberta.roberta.embeddings.token_type_embeddings.weight', 'roberta.roberta.embeddings.LayerNorm.weight', 'roberta.roberta.embeddings.LayerNorm.bias', 'roberta.roberta.encoder.layer.0.attention.self.query.weight', ... ]\r\n```\r\n\r\nI don't know why this model has '**roberta.roberta**.embeddings.position_ids', not '**roberta**.embeddings.position_ids'",
"Hmmm, the issue here is that there is a difference between `RobertaModel`, which has the following weights:\r\n```\r\nembeddings.position_ids\r\nembeddings.xxx\r\n[...]\r\n```\r\nand `RobertaForMaskedLM`, which contains `RobertaModel` under the `roberta` prefix:\r\n```\r\nroberta.embeddings.position_ids\r\nroberta.embeddings.xxx\r\n[...]\r\nlm_head.dense\r\nlm_head.bias\r\n[...]\r\n```\r\n\r\nI'm not entirely sure of what you're trying to achieve as I don't see your forward function, but I think you could prevent a lot of pain by redefining your model somewhat like `RobertaForMaskedLM` is setup:\r\n\r\n\r\n```py\r\n# Import the RobertaLMHead\r\nfrom transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel, RobertaLMHead\r\n\r\n\r\nclass MaskedLM(RobertaPreTrainedModel):\r\n def __init__(self, config):\r\n super().__init__(config=config)\r\n\r\n # Create the RoBERTa model and its head like in the MaskedLM layer\r\n self.roberta = RobertaModel(config)\r\n self.lm_head = RobertaLMHead(config)\r\n\r\n self.refinement_num = 3\r\n self.init_weights()\r\n\r\n def forward( ... )\r\n outputs = self.roberta(xxx)\r\n sequence_output = outputs[0]\r\n prediction_scores = self.lm_head(sequence_output)\r\n\r\n # Do your stuff!\r\n```\r\n\r\nThis way you can load the checkpoint seamlessly in your model, as the naming with the prefixes will be correct.",
"Thanks!! I tried to build a MaskedLM with some refinements. After predicting multiple <mask> tokens, mask two random predicted tokens and predict them again. Anyway thanks a lot 🤗 🤗 🤗 ",
"Glad I could help!",
"> (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n\r\nHi~ \r\nI just have a question about why ” (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).“ IS NOT expected? the knowledge in pretrained model is same as your task that you are going to finetune?\r\nThanks~\r\nBest,\r\nPengbo"
] | 1,611 | 1,677 | 1,612 | CONTRIBUTOR | null | I keep failing to load a model checkpoint.
I built a model that inherits from PreTrainedModel and instantiates RoBERTa inside its initialization.
Training this model with the Trainer works fine, but when I try to load the checkpoint using ```from_pretrained```, it keeps failing to load the checkpoint. Can someone help me out? Thanks
Structure of my model
```
class MaskClassifier(PreTrainedModel):
    def __init__(self, config, path):
        super().__init__(config=config)
        self.roberta = RobertaModel.from_pretrained(path)
        self.max_mask = 10
        self.hidden_size = RobertaConfig().hidden_size
        self.linear1 = torch.nn.Linear(2 * self.hidden_size, self.hidden_size)
        self.linear2 = torch.nn.Linear(self.hidden_size, self.max_mask + 1)
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, input_ids, attention_mask, token_type_ids, labels=None):
        ...
        # Feed input to RoBERTa
```
Initialize before training
```
config = RobertaConfig()
config.max_position_embeddings = 514
config.type_vocab_size = 1
config.vocab_size = 50265
model = MaskClassifier(config=config, path='roberta-base')
```
Saving after training
```trainer.save_model('./slogan_pretrained')```
Loading the checkpoint
```
config = RobertaConfig()
config.max_position_embeddings = 514
config.type_vocab_size = 1
config.vocab_size = 50265
model = MaskClassifier.from_pretrained(path, config=config, path='roberta-base')
```
I found a similar issue (https://github.com/huggingface/transformers/issues/2886), but I don't know exactly how I should override the ```from_pretrained``` function, and even when I tried overriding this function, it still couldn't load the checkpoint.
Error Message
> Some weights of MaskClassifier were not initialized from the model checkpoint at /home/yeoun/slogans/slogan_pretrained and are newly initialized: ['.roberta.embeddings.position_ids', '.roberta.embeddings.word_embeddings.weight', '.roberta.embeddings.position_embeddings.weight', '.roberta.embeddings.token_type_embeddings.weight', '.roberta.embeddings.LayerNorm.weight', '.roberta.embeddings.LayerNorm.bias', '.roberta.encoder.layer.0.attention.self.query.weight', '.roberta.encoder.layer.0.attention.self.query.bias', '.roberta.encoder.layer.0.attention.self.key.weight', '.roberta.encoder.layer.0.attention.self.key.bias', '.roberta.encoder.layer.0.attention.self.value.weight', ...
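For completeness, here is a minimal sketch of how the checkpoint keys can be compared with the keys the model expects. It assumes `model` was built as in the snippets above and that the checkpoint was saved to `./slogan_pretrained`; the paths are only illustrative.
```python
import torch

# Keys stored in the checkpoint written by trainer.save_model('./slogan_pretrained')
state_dict = torch.load("./slogan_pretrained/pytorch_model.bin", map_location="cpu")
checkpoint_keys = set(state_dict.keys())

# Keys the freshly constructed model expects
model_keys = set(model.state_dict().keys())

print("In checkpoint but not in model:", sorted(checkpoint_keys - model_keys)[:10])
print("In model but not in checkpoint:", sorted(model_keys - checkpoint_keys)[:10])
```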
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9882/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9881/comments | https://api.github.com/repos/huggingface/transformers/issues/9881/events | https://github.com/huggingface/transformers/issues/9881 | 796,525,169 | MDU6SXNzdWU3OTY1MjUxNjk= | 9,881 | DeBERTa pretraining using MLM: model gradients become NAN | {
"login": "mansimane",
"id": 23171195,
"node_id": "MDQ6VXNlcjIzMTcxMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23171195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mansimane",
"html_url": "https://github.com/mansimane",
"followers_url": "https://api.github.com/users/mansimane/followers",
"following_url": "https://api.github.com/users/mansimane/following{/other_user}",
"gists_url": "https://api.github.com/users/mansimane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mansimane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mansimane/subscriptions",
"organizations_url": "https://api.github.com/users/mansimane/orgs",
"repos_url": "https://api.github.com/users/mansimane/repos",
"events_url": "https://api.github.com/users/mansimane/events{/privacy}",
"received_events_url": "https://api.github.com/users/mansimane/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hi @mansimane \r\n\r\nIn your code in `TrainingArguments`, `adam_epsilon` is set to `1e06`, which quite a large value, I believe it's a typo, should be 1e-6 as mentioned in the comment. This could be the reason for `nan` gradients. ",
"Thanks @patil-suraj for the catch. I fixed the Adam epsilon, but still some gradients are becoming infinity and nan after first backward pass. Following is the config I tried \r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./deberta\",\r\n overwrite_output_dir=True,\r\n\r\n num_train_epochs=1000,\r\n per_gpu_train_batch_size=32,\r\n learning_rate=1e-10,\r\n\r\n warmup_steps=10000,\r\n weight_decay=0.01,\r\n adam_beta1=0.9,\r\n adam_beta2=0.999,\r\n adam_epsilon=1e-6,\r\n max_grad_norm=1.0,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n logging_first_step=False,\r\n logging_steps=1,\r\n max_steps=10000,\r\n gradient_accumulation_steps=1,\r\n\r\n)\r\n```",
"Hi, \r\n\r\nsorry for the late reply. I tested MLM with `DeBertaForMaskedLM` using the `run_mlm.py` script, and everything seems to be working fine. So it seems like a hyperparameter issue (I would suggest using the same hyperparameter values as this script). Your learning rate for example seems way too low.\r\n\r\nMy Google colab to reproduce: https://colab.research.google.com/drive/1Rk5JoBTzK0I8J3FjG2R4J9HCeOrUpRTt?usp=sharing",
"I am having the same issue but with MobileBert after loading a pre-trained model. I trained from scratch a LM 23000 steps. Now loading the model mobilebert.from_pretrained() to reload the model and keep training. Now when I try to keep training the loss i NaN. I have removed all related to learning rate in the training args and the nans keep appearing.\r\n\r\n```\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_linear_att_4Heads_8L_128_512_03layerdrop_shared_all_dataset_1\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=95,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n logging_steps=50,\r\n gradient_accumulation_steps=8,\r\n fp16=True,\r\n dataloader_num_workers=19,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n tokenizer=tokenizer)\r\n\r\n\r\ntrainer.train()\r\n```\r\n\r\nEDIT: After some debugging I looked into the \"trainer_state.json\" and I have seen that before finishing the last training I got NaNs into the model so, it is nothing related to learning rate o something at this moment.\r\n\r\n```\r\n{\r\n \"cuda max_memory_reserved\": 23460839424,\r\n \"cuda memory cached\": 23460839424,\r\n \"cuda memory consumption\": 111139328,\r\n \"epoch\": 0.99,\r\n \"learning_rate\": 0.0004937288135593219,\r\n \"loss\": 4.5816,\r\n \"num_parameters\": 5920442,\r\n \"step\": 22900\r\n },\r\n {\r\n \"cuda max_memory_reserved\": 23460839424,\r\n \"cuda memory cached\": 23460839424,\r\n \"cuda memory consumption\": 111139328,\r\n \"epoch\": 0.99,\r\n \"learning_rate\": 0.0004934745762711864,\r\n \"loss\": NaN,\r\n \"num_parameters\": 5920442,\r\n \"step\": 22950\r\n },\r\n```\r\n\r\nEDIT2: I think that my issue is related to the scheduler in the learning rate. I am trying to train in batches of 20% of the dataset, so the learning rate scheduler I think, it calculate the learning rate based on the epoch and not on the current step, so I hardcoded in: \r\n\r\n```\r\nself.lr_scheduler = get_scheduler(\r\n self.args.lr_scheduler_type,\r\n self.optimizer,\r\n num_warmup_steps=self.args.warmup_steps,\r\n num_training_steps=num_training_steps, # <- here I hardcoded the calculated final (20%+20%+20%...) training steps\r\n )\r\n```\r\n\r\nSo when I was approximating the final of the training in the first 20% it got something weird.",
"it's a pain to train on shards of (bookcorpus + wikipedia + openwebtext) I am processing the 20% of each one because I dont have more than 1 TB of disk. But I am figthing with the learning rate scheduler, because I have to do engineering to train on all the dataset. ",
"Thank you @NielsRogge . I was able to train DeBERTa with run_mlm.py script. Not sure what was the issue in my code, it gave nan after trying learning rate that you used as well. ",
"@mansimane are you using fp16 or fp32 ?"
] | 1,611 | 1,672 | 1,612 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Ubuntu
- Python version: 3.6.12
- PyTorch version : 1.7.1
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y, using 8 GPU machine.
### Who can help
@BigBird01 @NielsRogge
Models:
DeBERTa Base
## Information
I am using the DeBERTa base model and training it on the Masked Language Modeling task using a single file from the Wikipedia text dataset. For the first step the loss is around 11, and after the backward pass the gradients become NaN and the gradient norm goes to infinity.
I reduced the learning rate from 1e-4 to 5e-10, but the issue still persists. The batch size per GPU is 32, so with 8 GPUs the total batch size is 256. The hyperparameters configured according to the paper are listed below.
* Number of Layers: 12
* Hidden size: 768
* FNN inner hidden size: 3072
* Attention Heads: 12
* Attention Head size: 64
* Dropout: 0.1
* Warmup Steps: 10k
* Learning Rates: 1e-4
* Batch Size: 256
* Weight Decay: 0.01
* Max Steps: 1M
* Learning Rate Decay: Linear
* Adam ε: 1e-6
* Adam β1: 0.9
* Adam β2: 0.999
* Gradient Clipping: 1.0
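For reference, here is a minimal sketch of how the parameters with non-finite gradients could be located right after a backward pass. This is plain PyTorch run outside the Trainer; the helper name and the usage lines are only illustrative.
```python
import torch

def report_nonfinite_grads(model):
    """Print every parameter whose gradient contains NaN or Inf values."""
    for name, param in model.named_parameters():
        if param.grad is not None and not torch.isfinite(param.grad).all():
            print(
                f"{name}: nan={torch.isnan(param.grad).sum().item()}, "
                f"inf={torch.isinf(param.grad).sum().item()}"
            )

# Usage after a manual forward/backward step:
# loss = model(input_ids=batch["input_ids"], labels=batch["labels"]).loss
# loss.backward()
# report_nonfinite_grads(model)
```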
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import (
DebertaConfig,
DebertaTokenizer,
DebertaForMaskedLM,
LineByLineTextDataset,
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments
)
tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
train_dataset = LineByLineTextDataset(
tokenizer=tokenizer,
file_path="/data/wikidemo/wiki_01",
block_size=128,
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
config = DebertaConfig()
model = DebertaForMaskedLM(config=config)
training_args = TrainingArguments(
output_dir="./deberta",
overwrite_output_dir=True,
num_train_epochs=1000,
per_gpu_train_batch_size=2,
learning_rate=5e-10,
weight_decay=0.01,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e06,
max_grad_norm=1.0,
save_steps=10_000,
save_total_limit=2,
logging_first_step=False,
logging_steps=1,
max_steps=10000,
gradient_accumulation_steps=10,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
)
print("Starting training")
trainer.train()
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9881/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9880/comments | https://api.github.com/repos/huggingface/transformers/issues/9880/events | https://github.com/huggingface/transformers/pull/9880 | 796,485,945 | MDExOlB1bGxSZXF1ZXN0NTYzNjU5NTU5 | 9,880 | [trainer] [deepspeed] refactor deepspeed setup devices | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | Following the discussion at https://github.com/huggingface/transformers/pull/9798#pullrequestreview-578822445: as we now have multiple integrations with complex, unique setups, @sgugger and I agreed that it's better to accept a small duplication of a few lines of code in exchange for making it much easier to understand what goes on for a specific integration. So rather than further refactoring the recently added sage branch, this PR creates a dedicated branch for DeepSpeed and thus simplifies the general case when straight DDP is used.
There is no functionality change - just a small code reshuffle.
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9880/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9880",
"html_url": "https://github.com/huggingface/transformers/pull/9880",
"diff_url": "https://github.com/huggingface/transformers/pull/9880.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9880.patch",
"merged_at": 1611937084000
} |
https://api.github.com/repos/huggingface/transformers/issues/9879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9879/comments | https://api.github.com/repos/huggingface/transformers/issues/9879/events | https://github.com/huggingface/transformers/pull/9879 | 796,469,627 | MDExOlB1bGxSZXF1ZXN0NTYzNjQ2MzEx | 9,879 | [seq2seq] correctly handle mt5 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> when you port this to the new run_seq2seq, it would be great to try to find a way to make this not use any special code for a given model\r\n\r\nI'm working on it in #9844, it's not finished though. We might need to add `get_input_embeddings` and `get_pos_embeddings` methods to every s2s model, to avoid special cases.",
"If we need to add some methods to deal with the special cases, I would prefer it (otherwise the script might fail with new seq2seq models)."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR fixes `seq2seq/utils.py` to handle `mt5` like it does `t5`.
Ideally there should be a test, which would require creating a tiny model for mt5, but I'm being told this code is going away anyway, so there is no point investing energy into it.
Fixes: https://github.com/huggingface/transformers/issues/9865
@patil-suraj, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9879",
"html_url": "https://github.com/huggingface/transformers/pull/9879",
"diff_url": "https://github.com/huggingface/transformers/pull/9879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9879.patch",
"merged_at": 1611936682000
} |
https://api.github.com/repos/huggingface/transformers/issues/9878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9878/comments | https://api.github.com/repos/huggingface/transformers/issues/9878/events | https://github.com/huggingface/transformers/issues/9878 | 796,413,418 | MDU6SXNzdWU3OTY0MTM0MTg= | 9,878 | [DOCS] curl links go to 404 not found in NER tutorial | {
"login": "INF800",
"id": 45640029,
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/INF800",
"html_url": "https://github.com/INF800",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"repos_url": "https://api.github.com/users/INF800/repos",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"the doc link you shared if for V2.2.0, we have updated all the examples in a recent version. You can find the new ner examples here https://github.com/huggingface/transformers/tree/master/examples/token-classification",
"Thank you @patil-suraj "
] | 1,611 | 1,611 | 1,611 | NONE | null | Hey, in the [NER tutorial](https://huggingface.co/transformers/v2.2.0/examples.html#named-entity-recognition) the curl commands seem outdated:
```
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-train.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-dev.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp
curl -L 'https://sites.google.com/site/germeval2014ner/data/NER-de-test.tsv?attredirects=0&d=1' \
| grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp
```
Can you please share new curl requests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9877/comments | https://api.github.com/repos/huggingface/transformers/issues/9877/events | https://github.com/huggingface/transformers/pull/9877 | 796,403,840 | MDExOlB1bGxSZXF1ZXN0NTYzNTkxMzk0 | 9,877 | Fix head masking for TFT5 models | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stancld can you please rebase on master in order to solve the conflicts?",
"Thanks! The PR should be merged once @LysandreJik and @patrickvonplaten will have reviewed it.",
"Thanks for fixing it!"
] | 1,611 | 1,613 | 1,613 | CONTRIBUTOR | null | * This PR fixes head masking in TFT5 models (#9859)
* This PR further renames the error message variable from `__HEAD_MASK_WARNING_MSG` to `_HEAD_MASK_WARNING_MSG`, as the former was not working properly and raised an error (the double leading underscore triggers Python's name mangling when the constant is referenced inside a class; see the sketch below)
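A minimal sketch of the failure mode, with illustrative names rather than the real module contents:
```python
__WARNING_MSG = "head_mask was split into two input args"  # module-level constant

class TFT5Stack:  # illustrative stand-in, not the real class
    def warn(self):
        # Inside a class body, `__WARNING_MSG` is name-mangled to
        # `_TFT5Stack__WARNING_MSG`, which does not exist at module level.
        return __WARNING_MSG

try:
    TFT5Stack().warn()
except NameError as e:
    print(e)  # name '_TFT5Stack__WARNING_MSG' is not defined
```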
<hr>
Fixes: #9859
Reviewers: @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9877/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9877",
"html_url": "https://github.com/huggingface/transformers/pull/9877",
"diff_url": "https://github.com/huggingface/transformers/pull/9877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9877.patch",
"merged_at": 1613577609000
} |
https://api.github.com/repos/huggingface/transformers/issues/9876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9876/comments | https://api.github.com/repos/huggingface/transformers/issues/9876/events | https://github.com/huggingface/transformers/pull/9876 | 796,377,159 | MDExOlB1bGxSZXF1ZXN0NTYzNTY4MjM3 | 9,876 | When on sagemaker use their env variables for saves | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
When on SageMaker, the content of the env variable "SM_OUTPUT_DATA_DIR" should be used to save training artifacts (such as our checkpoints), so this PR makes it overwrite `output_dir` (and makes that argument optional so it doesn't need to be passed for SageMaker training).
The final model is then easy to deploy if it is also saved to the location given by the env variable "SM_MODEL_DIR", so that is added as well; a hedged sketch of the override logic follows below.
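A hedged sketch of the intended override logic (illustrative helper, not the actual `TrainingArguments` implementation):
```python
import os

# Illustrative helper: prefer the SageMaker-provided locations when present.
def resolve_dirs(cli_output_dir=None):
    output_dir = os.environ.get("SM_OUTPUT_DATA_DIR", cli_output_dir)  # checkpoints/artifacts
    model_dir = os.environ.get("SM_MODEL_DIR")  # final model for deployment, if set
    return output_dir, model_dir

# Outside SageMaker the CLI value is kept; inside, the env variables win.
print(resolve_dirs("my_output"))
```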
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9876/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9876",
"html_url": "https://github.com/huggingface/transformers/pull/9876",
"diff_url": "https://github.com/huggingface/transformers/pull/9876.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9876.patch",
"merged_at": 1611931947000
} |
https://api.github.com/repos/huggingface/transformers/issues/9875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9875/comments | https://api.github.com/repos/huggingface/transformers/issues/9875/events | https://github.com/huggingface/transformers/pull/9875 | 796,334,224 | MDExOlB1bGxSZXF1ZXN0NTYzNTMwMjIx | 9,875 | Clarify use of unk_token in slow tokenizers' docstrings | {
"login": "ethch18",
"id": 12580176,
"node_id": "MDQ6VXNlcjEyNTgwMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12580176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethch18",
"html_url": "https://github.com/ethch18",
"followers_url": "https://api.github.com/users/ethch18/followers",
"following_url": "https://api.github.com/users/ethch18/following{/other_user}",
"gists_url": "https://api.github.com/users/ethch18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethch18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethch18/subscriptions",
"organizations_url": "https://api.github.com/users/ethch18/orgs",
"repos_url": "https://api.github.com/users/ethch18/repos",
"events_url": "https://api.github.com/users/ethch18/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethch18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
Currently, the docstrings for slow tokenizers' `tokenize()` method claim that unknown tokens will be left in place, in contrast to the fast tokenizers' behavior. In reality, both convert unknown tokens to `unk_token`.
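A quick illustration of the shared behaviour (the printed output is what I would expect for `bert-base-uncased`, not something quoted from the docs):
```python
from transformers import BertTokenizer, BertTokenizerFast

slow = BertTokenizer.from_pretrained("bert-base-uncased")
fast = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Both replace the out-of-vocabulary symbol with unk_token rather than leaving
# it in place; I would expect ['hello', '[UNK]'] from each call.
print(slow.tokenize("hello 🤖"))
print(fast.tokenize("hello 🤖"))
```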
Fixes #9714
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9875/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9875",
"html_url": "https://github.com/huggingface/transformers/pull/9875",
"diff_url": "https://github.com/huggingface/transformers/pull/9875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9875.patch",
"merged_at": 1611915114000
} |
https://api.github.com/repos/huggingface/transformers/issues/9874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9874/comments | https://api.github.com/repos/huggingface/transformers/issues/9874/events | https://github.com/huggingface/transformers/pull/9874 | 796,248,404 | MDExOlB1bGxSZXF1ZXN0NTYzNDU3NTM1 | 9,874 | pin_memory -> dataloader_pin_memory | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Updated with review comments. Please let me know if/when it's okay to merge :) ",
"Good for me, thanks a lot!",
"this is much better, thank you for the adjustment, @abhishekkrthakur "
] | 1,611 | 1,611 | 1,611 | MEMBER | null | Ref: https://github.com/huggingface/transformers/pull/9857#issuecomment-769256215
This PR adds a new argument `dataloader_pin_memory` to `TrainingArguments`. You can use this to pin memory in `DataLoader`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9874/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9874",
"html_url": "https://github.com/huggingface/transformers/pull/9874",
"diff_url": "https://github.com/huggingface/transformers/pull/9874.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9874.patch",
"merged_at": 1611864646000
} |
https://api.github.com/repos/huggingface/transformers/issues/9873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9873/comments | https://api.github.com/repos/huggingface/transformers/issues/9873/events | https://github.com/huggingface/transformers/issues/9873 | 796,188,222 | MDU6SXNzdWU3OTYxODgyMjI= | 9,873 | Strange hyperparameter warning | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @alexvaca0 thanks for bringing this up. Indeed our example is outdated. I updated it here https://github.com/amogkam/ray/blob/hf-pbt/python/ray/tune/examples/pbt_transformers/pbt_transformers.py and when running on the latest Ray wheels and the latest transformer release (4.2.2), I am seeing perturbations happening as expected\r\n\r\n\r\nI'll also try this out on transformers master branch and see if it's working on that as well, and will update here again. Thanks!",
"I just tried on transformers master and am seeing it work as well. Please let me know if this updated example works for you @alexvaca0! ",
"Thank you so much for your quick and very helpful answer @amogkam I'm using also the latest ray wheels and master version of transformers, therefore that example should work for me too! I'm going to try it as soon as I can so that I can tell you if it works for me or not! Thank you :) ",
"I just checked your code and I don't find any change with respect to the official example except for the evaluation strategy, which was steps and now it's epochs, but maybe I'm missing something. I'll try first that example and then I'll try to apply it to my dataset. I have one more question regarding PBT with transformers: I'm observing that from time to time the models \"re-start\" from the beggining (that is, a model that had trained for 1.53 epochs suddenly returns to step 0 and starts from there). In my configuration I set number of epochs to 10, expecting that each of the 4 models in the population trains for 10 epochs, but mutating their configurations in the process. However, they'd never reach that number of epochs if they continue restarting from the beginning... Is there something I'm missing here? Do you think this issue will be also solved with the new training script? @amogkam Thank you !! :) ",
"@amogkam \r\n\r\n2021-01-29 15:01:01,078\tWARNING trial_runner.py:370 -- Trial Runner checkpointing failed: Checkpoint must not be in-memory.\r\n\r\nIt still throws this error, and I've checked that another anomaly still persists: models not always re-start training from the point where they left, but start from the beginning again... Any clues why this may be happening? It's strange that this doesn't occur always, as sometimes models do re-start training from the point where they left...",
"Hey @alexvaca0, could you share what your stdout looks like please? Also is this with any modifications to the example, and can you share the full code that you are using? Thanks!",
"Could you give me your email so that I can share it with you that way? :) @amogkam ",
"Hey yes you can send it to [email protected]",
"Great! Already sent :) @amogkam ",
"I am having the same problem ```Trial Runner checkpointing failed: Checkpoint must not be in-memory.```, but it does some times manage to create checkpoints, as I have ```PopulationBasedTraining: 4 checkpoints, 2 perturbs```\r\nModels seem to start training from the beginning again most of the time. I'm guessing it happens when the checkpoiting fails.\r\n\r\nI also notice that I am getting some errors and warnings:\r\n\r\n>WARNING function_runner.py:541 -- Function checkpointing is disabled. This may result in unexpected behavior when using checkpointing features or certain schedulers. To enable, set the train function arguments to be `func(config, checkpoint_dir=None)\r\n\r\n>ERROR syncer.py:72 -- Log sync requires rsync to be installed.\r\n\r\nDid you find a solution? \r\n\r\nAlso, I don't really understand how the ```perturbation_interval``` and ```time_attr``` arguments work together. It seems to consider the models for perturbations at the number of ```logging_steps``` I set in ```TrainingArguments```, but as I understand it, it is supposed to do so after every training step (so after every minibatch?) when ```time_attr=training_iteration``` and ```perturbation_interval=1.``` Since that's how it seems to work, I set ```checkpoint_freq``` to the same value as ```logging_steps``` in ```TrainingArguments```\r\n\r\n\r\nHere's what I think is the relevant part of my code:\r\n\r\n``` python\r\nclass UCCTrainer(Trainer):\r\n def compute_loss(self, model, inputs, return_outputs=False):\r\n outputs = model(\r\n input_ids=inputs['input_ids'],\r\n attention_mask=inputs['attention_mask'],\r\n token_type_ids=inputs['token_type_ids']\r\n )\r\n loss = th.nn.BCEWithLogitsLoss()(outputs['logits'], inputs['labels'])\r\n return (loss, outputs) if return_outputs else loss\r\n\r\n\r\ndef model_init():\r\n return BertForSequenceClassification.from_pretrained(\r\n config.MODEL_NAME, return_dict=True\r\n )\r\n\r\n\r\ndef objective(metrics):\r\n try:\r\n return metrics[config.COMPUTE_OBJECTIVE]\r\n except KeyError:\r\n return metrics[f'eval_{config.COMPUTE_OBJECTIVE}']\r\n\r\n\r\ndef hp_space(trial):\r\n return {\r\n 'learning_rate': tune.uniform(1e-5, 5e-5),\r\n 'num_train_epochs': tune.choice([2, 3, 4, 5]),\r\n 'seed': tune.choice(range(1, 50)),\r\n 'weight_decay': tune.uniform(0.0, 0.3),\r\n 'per_device_train_batch_size': tune.choice([10, 15, 20])\r\n }\r\n\r\n\r\ndef compute_metrics(eval_pred: EvalPrediction):\r\n scores = eval_pred.predictions # np.array 4427x2\r\n labels = eval_pred.label_ids # np.array 4427x2\r\n pred = np.argmax(scores, axis=1)\r\n labels_flat = np.argmax(labels, axis=1)\r\n return get_binary_metrics(pred, labels_flat)\r\n\r\nif __name__ == '__main__':\r\n os.environ['WANDB_WATCH'] = 'all'\r\n tokenizer = BertTokenizer.from_pretrained(\r\n config.MODEL_NAME,\r\n do_lower_case=config.DO_LOWER_CASE\r\n )\r\n train_df = dataframe_from_json('data/train_balanced.json')\r\n train_binary = make_binary_df(train_df)\r\n train_data = UCCDataset(train_binary, tokenizer, config.MAX_LEN)\r\n total_steps = len(train_data)/config.TRAIN_BATCH_SIZE\r\n warmup_steps = round(0.1*total_steps)\r\n training_args = TrainingArguments(\r\n output_dir=config.OUTPUT_DIR,\r\n do_train=True,\r\n do_eval=True,\r\n evaluation_strategy='steps',\r\n learning_rate=config.LEARNING_RATE,\r\n weight_decay=0.1,\r\n logging_steps=config.LOG_INTERVAL,\r\n seed=1,\r\n disable_tqdm=True,\r\n report_to=['wandb'],\r\n run_name=config.RUN_NAME,\r\n load_best_model_at_end=config.LOAD_BEST_LAST,\r\n 
metric_for_best_model=config.COMPUTE_OBJECTIVE,\r\n logging_first_step=True,\r\n lr_scheduler_type='linear',\r\n warmup_steps=warmup_steps\r\n )\r\n val_df = get_clean_df(pd.read_csv('data/val.csv'))\r\n val_binary = make_binary_df(val_df)\r\n val_data = UCCDataset(val_df, tokenizer, config.MAX_LEN)\r\n model_config = BertConfig(\r\n vocab_size=tokenizer.vocab_size,\r\n pretrained_model_name_or_path=config.MODEL_NAME,\r\n num_labels=config.N_LABELS,\r\n return_dict=True\r\n )\r\n\r\n trainer = UCCTrainer(\r\n args=training_args,\r\n train_dataset=train_data,\r\n eval_dataset=val_data,\r\n tokenizer=tokenizer,\r\n model_init=model_init,\r\n compute_metrics=compute_metrics\r\n )\r\n\r\n ray_scheduler = PopulationBasedTraining(\r\n time_attr='training_iteration',\r\n metric=f'eval_{config.COMPUTE_OBJECTIVE}',\r\n mode='max',\r\n perturbation_interval=1,\r\n hyperparam_mutations={\r\n 'learning_rate': tune.uniform(1e-5, 5e-5),\r\n 'num_train_epochs': tune.choice([2, 3, 4, 5]),\r\n 'seed': tune.choice(range(1, 50)),\r\n 'weight_decay': tune.uniform(0.0, 0.3),\r\n 'per_device_train_batch_size': tune.choice([10, 15, 20])\r\n }\r\n )\r\n best_model = trainer.hyperparameter_search(\r\n hp_space=hp_space,\r\n compute_objective=objective,\r\n n_trials=3,\r\n direction='maximize',\r\n backend='ray',\r\n # the following arguments are kwargs for tune.run\r\n scheduler=ray_scheduler,\r\n name='testmars5',\r\n resources_per_trial={'cpu': 1, 'gpu': 1},\r\n keep_checkpoints_num=3,\r\n checkpoint_score_attr=\"training_iteration\",\r\n checkpoint_freq=config.LOG_INTERVAL\r\n )\r\n```",
"Hey folks, is this still an issue?\r\n\r\ncc @jwa018 @khrystynaFaryna"
] | 1,611 | 1,626 | 1,611 | NONE | null | ## Environment info
- `transformers` version: Master Branch
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.7 (YES)
- Tensorflow version (GPU?):
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@amogkam
## Information
Model I am using (Bert, XLNet ...):
BART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Go to https://docs.ray.io/en/master/tune/examples/pbt_transformers.html and copy that code.
2. Execute that code.
3. Wait and when some iterations have been made, you'll see that a constant warning persists:
```{python}
2021-01-28 17:33:56,863 WARNING trial_runner.py:420 -- Trial Runner checkpointing failed: Checkpoint must not be in-memory.
```
Although it looks like a Ray-related problem, after reading https://github.com/huggingface/transformers/pull/6747 I have arrived at the conclusion that PBT may no longer be working because the checkpointing integration has been removed from Transformers.
When I look into the logs, I see that effectively no perturbation is being made, even though one should happen because perturbation_interval is set to 1 (the scheduler is configured roughly as in the sketch below).
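For context, this is roughly how the scheduler is wired up (a hedged sketch with illustrative values, adapted from the linked example rather than copied from it):
```python
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

scheduler = PopulationBasedTraining(
    time_attr="training_iteration",
    metric="eval_loss",
    mode="min",
    perturbation_interval=1,  # expect a perturbation check every training iteration
    hyperparam_mutations={
        "learning_rate": tune.uniform(1e-5, 5e-5),
        "per_device_train_batch_size": [4, 8, 16],
    },
)
# then passed to Trainer.hyperparameter_search(backend="ray", scheduler=scheduler, ...)
```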
## Expected behavior
It's expected that if I set perturbation_interval to 1, perturbations are made every 1 training iteration, but PBT is not doing any perturbation at all and I think it's because of some problem in the integration for checkpointing between Transformers and Ray Tune. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9873/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9872/comments | https://api.github.com/repos/huggingface/transformers/issues/9872/events | https://github.com/huggingface/transformers/pull/9872 | 796,166,450 | MDExOlB1bGxSZXF1ZXN0NTYzMzkwMjgz | 9,872 | on_log event should occur *after* the current log is written | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9872",
"html_url": "https://github.com/huggingface/transformers/pull/9872",
"diff_url": "https://github.com/huggingface/transformers/pull/9872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9872.patch",
"merged_at": 1611857465000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9871/comments | https://api.github.com/repos/huggingface/transformers/issues/9871/events | https://github.com/huggingface/transformers/issues/9871 | 796,003,221 | MDU6SXNzdWU3OTYwMDMyMjE= | 9,871 | Exception: You're trying to run a `Unigram` model but you're file was trained with a different algorithm | {
"login": "jiyanbio",
"id": 66310065,
"node_id": "MDQ6VXNlcjY2MzEwMDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/66310065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiyanbio",
"html_url": "https://github.com/jiyanbio",
"followers_url": "https://api.github.com/users/jiyanbio/followers",
"following_url": "https://api.github.com/users/jiyanbio/following{/other_user}",
"gists_url": "https://api.github.com/users/jiyanbio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiyanbio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiyanbio/subscriptions",
"organizations_url": "https://api.github.com/users/jiyanbio/orgs",
"repos_url": "https://api.github.com/users/jiyanbio/repos",
"events_url": "https://api.github.com/users/jiyanbio/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiyanbio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Use \"AlbertTokenizer\" rather than \"AutoTokenizer\", this should solve your issue.\r\nPlease, check the updated notebook version.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Prot_albert tokenizer is returning none type, what changed?"
] | 1,611 | 1,641 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-3.10.107-1-tlinux2_kvm_guest-0049-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [1 ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. open https://github.com/agemagician/ProtTrans/blob/master/Embedding/PyTorch/Basic/ProtAlbert.ipynb
2. run the code 'tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False )'
3. the following errors are reported:
Downloading: 100%|█████████████████████████████████████████████████████████████████| 505/505 [00:00<00:00, 516kB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████| 238k/238k [00:03<00:00, 77.0kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 385, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
return cls._from_pretrained(
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1841, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/models/albert/tokenization_albert_fast.py", line 136, in __init__
super().__init__(
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 89, in __init__
fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 659, in convert_slow_tokenizer
return converter_class(transformer_tokenizer).converted()
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 349, in converted
tokenizer = self.tokenizer(self.proto)
File "/home/anaconda3/envs/prottrans/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py", line 335, in tokenizer
raise Exception(
Exception: You're trying to run a `Unigram` model but you're file was trained with a different algorithm
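A hedged sketch of the workaround suggested in the comments above: load the slow sentencepiece tokenizer directly so that no slow-to-fast conversion is attempted.
```python
from transformers import AlbertTokenizer

# Loading the slow tokenizer avoids the Unigram converter that raises above.
tokenizer = AlbertTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False)

# Presumably equivalent (assumption): keep AutoTokenizer but disable the fast conversion.
# tokenizer = AutoTokenizer.from_pretrained("Rostlab/prot_albert", do_lower_case=False, use_fast=False)
```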
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9870/comments | https://api.github.com/repos/huggingface/transformers/issues/9870/events | https://github.com/huggingface/transformers/issues/9870 | 795,993,762 | MDU6SXNzdWU3OTU5OTM3NjI= | 9,870 | IndexError when finetuning barthez on summarization | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmmm the -100 id should be linked to the ignored values. It shouldn't try to decode this. Pinging @sgugger ",
"Not sure why there would be `-100` in the labels with the old script. Note that we are not maintaining that one anymore and will replace it with `run_seq2seq` which is almost ready for use (misses a few features you don't seem to be using in your command anyway).\r\n\r\nIf you really need the old one, you should add the line\r\n```\r\nlabels = np.where(labels != -100, labels, tokenizer.pad_token_id)\r\n```\r\nin the `compute_metric` function to replace the -100s by the pad token id.",
"We also fixed this very recently (yesterday) in the model, see: https://github.com/huggingface/transformers/commit/74f16b82765a05eccee45e80d79370202a958873 => so you should also be able to run: \r\n```\r\npython finetune_trainer.py --learning_rate 3e-5 --fp16 --evaluation_strategy steps --predict_with_generate --model_name_or_path moussaKam/barthez --data_dir xsum --do_train --do_eval --output_dir welcome_back --per_device_train_batch_size 4 --task summarization --max_target_length 50 --overwrite_output_dir --eval_steps 50 --n_val 20\r\n```\r\non master now.\r\n\r\nHowever, as @sgugger points out, we strongly recommend using the `run_seq2seq.py` script from now on as we won't continue maintaining `finetune_trainer.py` anymore."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.0.dev0
- Platform: Linux-4.4.0-197-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): BARThez
## To reproduce
Steps to reproduce the behavior:
```
python finetune_trainer.py --learning_rate 3e-5 --fp16 --evaluation_strategy steps --predict_with_generate --model_name_or_path moussaKam/barthez --data_dir xsum --do_train --do_eval --output_dir welcome_back --per_device_train_batch_size 4 --task summarization --max_target_length 50 --overwrite_output_dir --eval_steps 50 --n_val 20
```
```
Traceback (most recent call last):
File "finetune_trainer.py", line 373, in <module>
main()
File "finetune_trainer.py", line 303, in main
train_result = trainer.train(
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 942, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1017, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/datadisks/datadisk1/transformers/src/transformers/trainer_seq2seq.py", line 96, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1458, in evaluate
output = self.prediction_loop(
File "/datadisks/datadisk1/transformers/src/transformers/trainer.py", line 1617, in prediction_loop
metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))
File "/datadisks/datadisk1/transformers/examples/seq2seq/utils.py", line 92, in summarization_metrics
pred_str, label_str = decode_pred(pred)
File "/datadisks/datadisk1/transformers/examples/seq2seq/utils.py", line 86, in decode_pred
label_str = tokenizer.batch_decode(pred.label_ids, skip_special_tokens=True)
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3070, in batch_decode
return [
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3071, in <listcomp>
self.decode(
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils_base.py", line 3109, in decode
return self._decode(
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils.py", line 711, in _decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/datadisks/datadisk1/transformers/src/transformers/tokenization_utils.py", line 695, in convert_ids_to_tokens
tokens.append(self._convert_id_to_token(index))
File "/datadisks/datadisk1/transformers/src/transformers/models/barthez/tokenization_barthez.py", line 237, in _convert_id_to_token
return self.sp_model.IdToPiece(index)
File "/home/dascim/anaconda3/envs/transformers/lib/python3.8/site-packages/sentencepiece/__init__.py", line 501, in _batched_func
return _func(self, arg)
File "/home/dascim/anaconda3/envs/transformers/lib/python3.8/site-packages/sentencepiece/__init__.py", line 494, in _func
raise IndexError('piece id is out of range.')
IndexError: piece id is out of range.
```
## Expected behavior
For some reason the tokenizer is trying to decode the -100 ids (the label positions ignored by the loss); a sketch of the workaround suggested in the comments follows below.
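A hedged sketch of that workaround (function and variable names are illustrative; `label_ids` stands for `pred.label_ids` inside the metric function):
```python
import numpy as np

def decode_labels(label_ids, tokenizer):
    # -100 marks label positions ignored by the loss; swap it for the pad token
    # id so the sentencepiece vocab never sees an out-of-range piece id.
    label_ids = np.where(label_ids != -100, label_ids, tokenizer.pad_token_id)
    return tokenizer.batch_decode(label_ids, skip_special_tokens=True)
```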
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9870/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9869/comments | https://api.github.com/repos/huggingface/transformers/issues/9869/events | https://github.com/huggingface/transformers/pull/9869 | 795,902,907 | MDExOlB1bGxSZXF1ZXN0NTYzMTcxOTcx | 9,869 | Added do_lower_case parameters for tokenizer in mlm training. | {
"login": "K-Mike",
"id": 22145213,
"node_id": "MDQ6VXNlcjIyMTQ1MjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/22145213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/K-Mike",
"html_url": "https://github.com/K-Mike",
"followers_url": "https://api.github.com/users/K-Mike/followers",
"following_url": "https://api.github.com/users/K-Mike/following{/other_user}",
"gists_url": "https://api.github.com/users/K-Mike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/K-Mike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/K-Mike/subscriptions",
"organizations_url": "https://api.github.com/users/K-Mike/orgs",
"repos_url": "https://api.github.com/users/K-Mike/repos",
"events_url": "https://api.github.com/users/K-Mike/events{/privacy}",
"received_events_url": "https://api.github.com/users/K-Mike/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there, thanks for your PR!\r\nThe examples scripts are kept simple and without too much functionality so users can easily understand and tweak them for their needs (they are just examples, they do not mean to cover **everything**). As you saw, it's super easy to add things like this option, if needed. The PR will stay this to demonstrate how, but I don't think we will merge it.",
"As far as I know this is already done in the Tokenizer logic. You can define this lower casing option in the `tokenizer_config.json` - and this is done for quite a lot models. I dont't see any reason to have this as an extra cli option 🤔 ",
"I just tried to be useful :)",
"No problem at all. If you want to make sure something you are working on will be accepted, don't hesitate to open an issue about it first, that way we can tell you if it's desirable or not :-)\r\nBe sure to check the [good first issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+label%3A%22Good+First+Issue%22) if you want to try something else!"
] | 1,611 | 1,611 | 1,611 | NONE | null | Added do_lower_case while training MLM. Useful for training cased BERT, for instance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9869/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9869",
"html_url": "https://github.com/huggingface/transformers/pull/9869",
"diff_url": "https://github.com/huggingface/transformers/pull/9869.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9869.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9868/comments | https://api.github.com/repos/huggingface/transformers/issues/9868/events | https://github.com/huggingface/transformers/pull/9868 | 795,807,978 | MDExOlB1bGxSZXF1ZXN0NTYzMDk0Njcw | 9,868 | Remove submodule | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | Removes the `datasets` submodule that was introduced in https://github.com/huggingface/transformers/pull/9825. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9868",
"html_url": "https://github.com/huggingface/transformers/pull/9868",
"diff_url": "https://github.com/huggingface/transformers/pull/9868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9868.patch",
"merged_at": 1611824634000
} |
https://api.github.com/repos/huggingface/transformers/issues/9867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9867/comments | https://api.github.com/repos/huggingface/transformers/issues/9867/events | https://github.com/huggingface/transformers/issues/9867 | 795,785,534 | MDU6SXNzdWU3OTU3ODU1MzQ= | 9,867 | where is position_embedding_type used | {
"login": "awdrgyjilplij",
"id": 21336173,
"node_id": "MDQ6VXNlcjIxMzM2MTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/21336173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awdrgyjilplij",
"html_url": "https://github.com/awdrgyjilplij",
"followers_url": "https://api.github.com/users/awdrgyjilplij/followers",
"following_url": "https://api.github.com/users/awdrgyjilplij/following{/other_user}",
"gists_url": "https://api.github.com/users/awdrgyjilplij/gists{/gist_id}",
"starred_url": "https://api.github.com/users/awdrgyjilplij/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awdrgyjilplij/subscriptions",
"organizations_url": "https://api.github.com/users/awdrgyjilplij/orgs",
"repos_url": "https://api.github.com/users/awdrgyjilplij/repos",
"events_url": "https://api.github.com/users/awdrgyjilplij/events{/privacy}",
"received_events_url": "https://api.github.com/users/awdrgyjilplij/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is used quite a lot! Here for example:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c3ae89ad3215c3252ebf8ce964795ba8813d810/src/transformers/models/electra/modeling_electra.py#L194-L196\r\n\r\nActually just Ctrl+F \"position_embedding_type\" in this file and you should be able to find out where it's used :) (11 occurrences)",
"thanks "
] | 1,611 | 1,611 | 1,611 | NONE | null | When I was using pytorch Electra Model, I read its source code but I didn't find where position_embedding_type is used.
So did I miss something? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9867/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9866/comments | https://api.github.com/repos/huggingface/transformers/issues/9866/events | https://github.com/huggingface/transformers/issues/9866 | 795,761,153 | MDU6SXNzdWU3OTU3NjExNTM= | 9,866 | Whole word mask in run_mlm_wwm.py | {
"login": "JiachengLi1995",
"id": 44625638,
"node_id": "MDQ6VXNlcjQ0NjI1NjM4",
"avatar_url": "https://avatars.githubusercontent.com/u/44625638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JiachengLi1995",
"html_url": "https://github.com/JiachengLi1995",
"followers_url": "https://api.github.com/users/JiachengLi1995/followers",
"following_url": "https://api.github.com/users/JiachengLi1995/following{/other_user}",
"gists_url": "https://api.github.com/users/JiachengLi1995/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JiachengLi1995/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiachengLi1995/subscriptions",
"organizations_url": "https://api.github.com/users/JiachengLi1995/orgs",
"repos_url": "https://api.github.com/users/JiachengLi1995/repos",
"events_url": "https://api.github.com/users/JiachengLi1995/events{/privacy}",
"received_events_url": "https://api.github.com/users/JiachengLi1995/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I noticed this myself, `DataCollatorForWholeWordMask` seems to be very specific to the BERT tokenizer. It seems that it should be using the special tokens mask, and the word_ids() from the tokenizer rather than rely on [CLS],[SEP] tokens and subwords starting with ## (so it fails with metaspace for example).\r\n\r\nEdit: It also calls `self._tensorize_batch` which as far as I can see isn't implemented, so I assume this class isn't maintained?",
"It's a bit old but I wanted to share my quick fix for Roberta-like tokenizers (I think this can be more general propose, but I just needed this for Herbert tokenizer):\r\n```\r\ndef _whole_word_mask(self, input_tokens: List[str], max_predictions=512):\r\n \"\"\"\r\n Get 0/1 labels for masked tokens with whole word mask proxy\r\n \"\"\"\r\n if not isinstance(self.tokenizer, (BertTokenizer, BertTokenizerFast,\r\n RobertaTokenizer, RobertaTokenizerFast,\r\n XLMRobertaTokenizer, XLMRobertaTokenizerFast,\r\n HerbertTokenizer, HerbertTokenizerFast,\r\n XLMTokenizer)):\r\n warnings.warn(\r\n \"DataCollatorForWholeWordMask is only suitable for BertTokenizer or RobertaTokenizer-like tokenizers. \"\r\n \"Please refer to the documentation for more information.\"\r\n )\r\n\r\n cand_indexes = []\r\n special_tokens = [val for key, val in self.tokenizer.special_tokens_map.items()\r\n if key not in ['unk_token', 'mask_token']]\r\n is_bert_tokenizer = isinstance(self.tokenizer, (BertTokenizer, BertTokenizerFast))\r\n for (i, token) in enumerate(input_tokens):\r\n if token in special_tokens:\r\n continue\r\n\r\n if is_bert_tokenizer:\r\n if len(cand_indexes) >= 1 and token.startswith(\"##\"):\r\n cand_indexes[-1].append(i)\r\n else:\r\n cand_indexes.append([i])\r\n else: # Roberta-like tokenizers have </w> token at the end to indicate end of word\r\n # edge case for chinese (##) are added in DataCollatorForWholeWordMask\r\n if token.startswith(\"##\"):\r\n token = token[2:]\r\n if token.endswith(\"</w>\"):\r\n token = token[:-4]\r\n if len(cand_indexes) == 0:\r\n cand_indexes.append([i])\r\n else:\r\n cand_indexes[-1].append(i)\r\n\r\n if token.endswith(\"</w>\"):\r\n cand_indexes.append([])\r\n\r\n if len(cand_indexes[-1]) == 0:\r\n cand_indexes = cand_indexes[:-1]\r\n\r\n random.shuffle(cand_indexes)\r\n num_to_predict = min(max_predictions, max(1, int(round(len(input_tokens) * self.mlm_probability))))\r\n masked_lms = []\r\n covered_indexes = set()\r\n for index_set in cand_indexes:\r\n if len(masked_lms) >= num_to_predict:\r\n break\r\n # If adding a whole-word mask would exceed the maximum number of\r\n # predictions, then just skip this candidate.\r\n if len(masked_lms) + len(index_set) > num_to_predict:\r\n continue\r\n is_any_index_covered = False\r\n for index in index_set:\r\n if index in covered_indexes:\r\n is_any_index_covered = True\r\n break\r\n if is_any_index_covered:\r\n continue\r\n for index in index_set:\r\n covered_indexes.add(index)\r\n masked_lms.append(index)\r\n\r\n if len(covered_indexes) != len(masked_lms):\r\n raise ValueError(\"Length of covered_indexes is not equal to length of masked_lms.\")\r\n mask_labels = [1 if i in covered_indexes else 0 for i in range(len(input_tokens))]\r\n return mask_labels\r\n```",
"If anybody else has this issue. I fixed it for RoBERTa by adding a few lines that deal with the way RoBERTa tokenizes. Note that it's not a general purpose solution for other LMs. The previous comment did not work for me. See here:\r\n\r\nhttps://github.com/RikVN/transformers/blob/main/src/transformers/data/data_collator.py#L948"
] | 1,611 | 1,652 | 1,619 | NONE | null | I find that `run_mlm_wwm.py` uses the whole word mask class `DataCollatorForWholeWordMask`.
But in this class `_whole_word_mask` function, we recognize if a token is the beginning of a word by this way:
```
cand_indexes = []
for (i, token) in enumerate(input_tokens):
if token == "[CLS]" or token == "[SEP]":
continue
if len(cand_indexes) >= 1 and token.startswith("##"):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
```
I also notice that `run_mlm_wwm.py` is used for Roberta pre-training in the examples. However, the tokenizer for Roberta doesn't contain tokens like `[CLS]` and `[SEP]`, and its subwords do not start with `##`.
How can this code handle language models that use a Roberta-like tokenizer?
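One tokenizer-agnostic direction, also mentioned in the comments, is to read word boundaries from `word_ids()` on a fast tokenizer instead of relying on the `##` prefix convention. A hedged sketch (outputs shown in comments are expected, not verified):
```python
from transformers import RobertaTokenizerFast

tok = RobertaTokenizerFast.from_pretrained("roberta-base")
enc = tok("Huggingface transformers", add_special_tokens=True)

# Tokens sharing a word id belong to the same word; None marks special tokens.
print(enc.tokens())    # e.g. ['<s>', 'Hug', 'ging', 'face', 'Ġtransform', 'ers', '</s>']
print(enc.word_ids())  # e.g. [None, 0, 0, 0, 1, 1, None]
```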
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9866/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9865/comments | https://api.github.com/repos/huggingface/transformers/issues/9865/events | https://github.com/huggingface/transformers/issues/9865 | 795,747,456 | MDU6SXNzdWU3OTU3NDc0NTY= | 9,865 | [trainer] seq2seq doesn't handle mt5 correctly | {
"login": "mxa4646",
"id": 37767536,
"node_id": "MDQ6VXNlcjM3NzY3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/37767536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mxa4646",
"html_url": "https://github.com/mxa4646",
"followers_url": "https://api.github.com/users/mxa4646/followers",
"following_url": "https://api.github.com/users/mxa4646/following{/other_user}",
"gists_url": "https://api.github.com/users/mxa4646/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mxa4646/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mxa4646/subscriptions",
"organizations_url": "https://api.github.com/users/mxa4646/orgs",
"repos_url": "https://api.github.com/users/mxa4646/repos",
"events_url": "https://api.github.com/users/mxa4646/events{/privacy}",
"received_events_url": "https://api.github.com/users/mxa4646/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"OK, I can reproduce the problem with just google/mt5-small and 2 gpus:\r\n```\r\nexport BS=1; PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path google/mt5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16\r\n```\r\n\r\nWe will get it sorted out today.",
"ok, the problem had nothing to do with DeepSpeed, it's just a seq2seq neglect. \r\n\r\nThe fix is:\r\n\r\n```\r\ndiff --git a/examples/seq2seq/utils.py b/examples/seq2seq/utils.py\r\nindex 8b24bfda..303b89f7 100644\r\n--- a/examples/seq2seq/utils.py\r\n+++ b/examples/seq2seq/utils.py\r\n@@ -563,7 +563,7 @@ def freeze_embeds(model):\r\n \"\"\"Freeze token embeddings and positional embeddings for bart, just token embeddings for t5.\"\"\"\r\n model_type = model.config.model_type\r\n\r\n- if model_type == \"t5\":\r\n+ if model_type in [\"t5\", \"mt5\"]:\r\n freeze_params(model.shared)\r\n for d in [model.encoder, model.decoder]:\r\n freeze_params(d.embed_tokens)\r\n```\r\n\r\nPlease let me know if you can manage to apply this fix. I will make a proper PR later, but it'll take some work, since I need to make a tiny mt5 model and add a test.\r\n\r\nYou can just edit the file if you don't know how to apply a patch. ",
"The fix should be merged shortly https://github.com/huggingface/transformers/pull/9879\r\n",
"I can solve the `--freeze_embeds` bug now, thanks for your help! @stas00 \r\n\r\nAs for questions 3 and 4, I noticed that the title of the issue has been edited. I don't know if these questions are caused by the model or the seq2seq trainer. Maybe I should raise them in a new issue?",
"Oh, you wrote those items as steps to reproduce the problem, so I didn't know that those were issues that needed to/could be fixed. \r\n\r\nOnce I discovered that the issue you posted was unrelated to DeepSpeed I took the liberty to adjust the subject.\r\n\r\nIn general, yes, let's try to keep each issue separate, so that it makes it much easier to track things and not let things fall between the cracks.\r\n\r\nBack to your follow up question:\r\n\r\nLooking just at the params:\r\n\r\n- t5-3b ~10GB\r\n- mt5-xl ~15GB\r\n\r\nSo the 2nd model is substantially larger, and if t5-3b fit tightly onto a 24GB card it's not surprising that the larger model didn't. \r\n\r\nand in addition to model params you also need to allocate memory for:\r\n- inputs\r\n- gradients\r\n- optimizer states\r\n\r\n\r\nI tried mt5-xl on 4x 40gb gpu setup and it worked, but took ~29GB on each GPU, so there is the problem - you're 5GB short.\r\n\r\nThe command I run was:\r\n```\r\nexport BS=1; PYTHONPATH=../../src USE_TF=0 /usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path google/mt5-xl --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16\r\n```\r\n\r\nYou may try to tweak the buffer sizes in `ds_config.json` but I think the gap is too big. \r\n\r\nI'm working on a 2D Parallelism solution that will combine pipe|model-parallelism w/ ZeRO-DP (DeepSpeed), which should enable such feats with huge models, but it might take some time. The docs aren't quite there so it takes a lot of trial and error to move forward. You may want to track this PR https://github.com/huggingface/transformers/pull/9765 for updates.\r\n\r\nAlternatively when fairscale or DeepSpeed releases ZeRO phase 3, you shouldn't have a problem loading this model onto 4x 24GB gpus. Currently the problem is that the model params are too big w/o phase 3. In phase 3 params are partitioned too - problem solved.\r\n",
"> I tried mt5-xl on 4x 40gb gpu setup and it worked, but took ~29GB on each GPU, so there is the problem - you're 5GB short.\r\n\r\nThat's help a lot! Thank you!\r\n\r\nI am also looking forward to ZeRO stage 3 and your pipe|model-parallelism. Hope one day we can working on it. Thank you again!",
"> And I got the overflow problem. This is not surprising me because MT5-large seems not fixed FP16 yet.\r\n\r\nDid you get `nan` loss or gradient overflow warning ? And yes, fp16 is still not working for mT5-large\r\n\r\n> I assume that T5-3b and MT5-xl should be in the same order of magnitude\r\n\r\nmT5-xl is actually quite bigger than T5-3b for two reasons\r\n1. It's vocab_size is huge (250112), which results in bigger token_embedding layer and final linear layer.\r\n2. It's based on t51.1 which uses `gated-gelu` activation instead of `relu`. `gated-gelu` adds one extra linear layer in every feed-forward layer.",
"@patil-suraj That's very helpful! Thank you a lot!\r\n\r\nNow I understand that there are many differences between mT5-xl and T5-3b, and I will set up separate experiments for them in the future. By the way, do you have any plans to repair the FP16 in mt5-large/xl ?",
"Dear @patil-suraj, here you have mentioned for mt5-small you have made it work with fp16? since you did not mention this model, do you mind telling me how you made it work? I am having a hard time with mt5-small with fp16 thanks a lot for your advice ",
"I have a similar error here\r\n\r\n```python\r\nfrom transformers import T5TokenizerFast, MT5ForConditionalGeneration\r\n\r\ntokenizer = T5TokenizerFast.from_pretrained('google/mt5-base') # \"google/mt5-base\" \"google/mt5-large\" \"google/mt5-xl\"\r\n\r\nmodel = MT5ForConditionalGeneration.from_pretrained('google/mt5-base', return_dict=True)\r\n\r\ncondition = \"translate English to German: \"\r\ninput = \"My name is Azeem and I live in India\"\r\n\r\n# You can also use \"translate English to French\" and \"translate English to Romanian\"\r\ninput_ids = tokenizer(condition+input, return_tensors=\"pt\").input_ids # Batch size 1\r\n\r\noutputs = model.generate(input_ids)\r\n\r\ndecoded = tokenizer.decode(outputs[0], skip_special_tokens=True)\r\n\r\nprint(decoded)\r\n```\r\n\r\n\r\nStacktrace:\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-8-f9822d331a70>](https://localhost:8080/#) in <module>()\r\n 3 tokenizer = T5TokenizerFast.from_pretrained('google/mt5-base') # \"google/mt5-base\" \"google/mt5-large\" \"google/mt5-xl\"\r\n 4 \r\n----> 5 model = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-base', return_dict=True)\r\n 6 \r\n 7 condition = \"translate English to German: \"\r\n\r\n8 frames\r\n[/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py](https://localhost:8080/#) in __getattribute__(self, key)\r\n 250 if key != \"attribute_map\" and key in super().__getattribute__(\"attribute_map\"):\r\n 251 key = super().__getattribute__(\"attribute_map\")[key]\r\n--> 252 return super().__getattribute__(key)\r\n 253 \r\n 254 def __init__(self, **kwargs):\r\n\r\nAttributeError: 'MT5Config' object has no attribute 'relative_attention_max_distance'\r\n```\r\n\r\n@stas00 any idea? I'm using HF master:\r\n```\r\n!pip install git+https://github.com/huggingface/transformers.git\r\n```",
"@loretoparisi \r\n\r\nThis is because T5Config now has `relative_attention_max_distance` attribute introduced in the #16155 which was missing from `MT5Config`. Fix is here #16170\r\n"
] | 1,611 | 1,647 | 1,611 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-58-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <yes>
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@stas00, @patrickvonplaten, @patil-suraj
## Information
Model I am using (MT5-xl, MT5-large):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (official example scripts task)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. The script I used is `examples/seq2seq/finetune_trainer.py`, which was originally used to reproduce the training of T5-3b on a single 3090. All processes are the same as in [#8771](https://github.com/huggingface/transformers/issues/8771#issuecomment-759176685), and it can reproduce the training of T5-3b (whether on a single card or on 2/4 cards).
2. Here is the problem: when I try to train MT5-xl, `--freeze_embeds` seems to trigger a bug. I used 4*3090. My script is:
```
export BS=1; PYTHONPATH=../../src; USE_TF=0;
/usr/bin/time -v deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
```
Here is my report:
```
[2021-01-27 14:59:52,982] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-01-27 14:59:57,024] [INFO] [runner.py:358:main] cmd = /<my_dir>/miniconda3/envs/nlp/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMSwgMiwgM119 --master_addr=127.0.0.1 --master_port=29500 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16
[2021-01-27 14:59:57,793] [INFO] [launch.py:78:main] WORLD INFO DICT: {'localhost': [0, 1, 2, 3]}
[2021-01-27 14:59:57,793] [INFO] [launch.py:87:main] nnodes=1, num_local_procs=4, node_rank=0
[2021-01-27 14:59:57,793] [INFO] [launch.py:99:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1, 2, 3]})
[2021-01-27 14:59:57,793] [INFO] [launch.py:100:main] dist_world_size=4
[2021-01-27 14:59:57,793] [INFO] [launch.py:103:main] Setting CUDA_VISIBLE_DEVICES=0,1,2,3
[2021-01-27 15:00:01,106] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,340] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,672] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
[2021-01-27 15:00:01,870] [INFO] [distributed.py:40:init_distributed] Initializing torch distributed with backend: nccl
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 2, device: cuda:2, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, 16-bits training: True
01/27/2021 15:00:05 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='output_dir', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=1, per_device_eval_batch_size=1, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=3e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-06, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_steps=5, logging_dir='runs/Jan27_15-00-01_user-SYS-4029GP-TRT', logging_first_step=True, logging_steps=1000, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=True, fp16_opt_level='O1', fp16_backend='auto', local_rank=0, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=25000, dataloader_num_workers=0, past_index=-1, run_name='output_dir', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed='ds_config.json', label_smoothing_factor=0.1, adafactor=False, sortish_sampler=True, predict_with_generate=True)
01/27/2021 15:00:05 - WARNING - __main__ - Process rank: 3, device: cuda:3, n_gpu: 1, distributed training: True, 16-bits training: True
[INFO|configuration_utils.py:443] 2021-01-27 15:00:05,352 >> loading configuration file /<my_model_dir>/models/mt5/xl/v0/config.json
[INFO|configuration_utils.py:481] 2021-01-27 15:00:05,353 >> Model config MT5Config {
"_name_or_path": "/home/patrick/t5/mt5-xl",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 5120,
"d_kv": 64,
"d_model": 2048,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 24,
"num_heads": 32,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.2.1",
"use_cache": true,
"vocab_size": 250112
}
[INFO|configuration_utils.py:443] 2021-01-27 15:00:05,353 >> loading configuration file /<my_model_dir>/models/mt5/xl/v0/config.json
[INFO|configuration_utils.py:481] 2021-01-27 15:00:05,354 >> Model config MT5Config {
"_name_or_path": "/home/patrick/t5/mt5-xl",
"architectures": [
"T5ForConditionalGeneration"
],
"d_ff": 5120,
"d_kv": 64,
"d_model": 2048,
"decoder_start_token_id": 0,
"dropout_rate": 0.1,
"eos_token_id": 1,
"feed_forward_proj": "gated-gelu",
"initializer_factor": 1.0,
"is_encoder_decoder": true,
"layer_norm_epsilon": 1e-06,
"model_type": "mt5",
"num_decoder_layers": 24,
"num_heads": 32,
"num_layers": 24,
"output_past": true,
"pad_token_id": 0,
"relative_attention_num_buckets": 32,
"tie_word_embeddings": false,
"tokenizer_class": "T5Tokenizer",
"transformers_version": "4.2.1",
"use_cache": true,
"vocab_size": 250112
}
[INFO|tokenization_utils_base.py:1685] 2021-01-27 15:00:05,354 >> Model name '/<my_model_dir>/models/mt5/xl/v0' not found in model shortcut name list (t5-small, t5-base, t5-large, t5-3b, t5-11b). Assuming '/<my_model_dir>/models/mt5/xl/v0' is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,354 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/tokenizer.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/added_tokens.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/special_tokens_map.json. We won't load it.
[INFO|tokenization_utils_base.py:1718] 2021-01-27 15:00:05,355 >> Didn't find file /<my_model_dir>/models/mt5/xl/v0/tokenizer_config.json. We won't load it.
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file /<my_model_dir>/models/mt5/xl/v0/spiece.model
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|tokenization_utils_base.py:1764] 2021-01-27 15:00:05,355 >> loading file None
[INFO|modeling_utils.py:1025] 2021-01-27 15:00:06,472 >> loading weights file /<my_model_dir>/models/mt5/xl/v0/pytorch_model.bin
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
[INFO|modeling_utils.py:1143] 2021-01-27 15:05:03,683 >> All model checkpoint weights were used when initializing MT5ForConditionalGeneration.
[INFO|modeling_utils.py:1152] 2021-01-27 15:05:03,683 >> All the weights of MT5ForConditionalGeneration were initialized from the model checkpoint at /<my_model_dir>/models/mt5/xl/v0.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MT5ForConditionalGeneration for predictions without further training.
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 230, in main
freeze_embeds(model)
File "/<my_dir>/transformers/examples/seq2seq/utils.py", line 567, in freeze_embeds
freeze_params(model.model.shared)
File "/<my_dir>/miniconda3/envs/nlp/lib/python3.7/site-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'model'
Command being timed: "deepspeed --num_gpus=4 ./finetune_trainer.py --model_name_or_path /<my_model_dir>/models/mt5/xl/v0 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 1 --per_device_train_batch_size 1 --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 60 --n_val 10 --n_test 10 --deepspeed ds_config.json --fp16"
User time (seconds): 348.34
System time (seconds): 177.55
Percent of CPU this job got: 166%
Elapsed (wall clock) time (h:mm:ss or m:ss): 5:15.88
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 33558800
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 67111048
Voluntary context switches: 132337
Involuntary context switches: 6635761
Swaps: 0
File system inputs: 29248712
File system outputs: 32
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
```
3. So I removed `--freeze_embeds` and tried to train MT5-xl again, but I got CUDA out of memory. My device is 4*24G 3090, with BS=1, ZeRO stage=2, and CPU_offload=true. I assume that T5-3b and MT5-xl should be in the same order of magnitude (see the rough sketch after this list), and since training works for t5-3b, I think this should not happen.
4. I also tried training MT5-large, just replacing mt5-xl with mt5-large under the same conditions as in 3. And I got the overflow problem. This does not surprise me, because fp16 does not seem to be fixed for MT5-large yet. In short, I want to know whether there is a problem with my setup or whether this is expected. If it is because MT5-large has not been fixed, does huggingface have any plans to fix it?
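As a rough back-of-envelope sketch of that order-of-magnitude assumption (my own approximations, not official numbers: `d_model` and `vocab_size` are taken from the config dump above, the parameter totals are only rough guesses, and this counts weights only, ignoring activations, gradients, optimizer states and DeepSpeed buffers):
```python
# Back-of-envelope only: weight memory, nothing else.
d_model, vocab_size = 2048, 250112                 # from the MT5Config dump above
embed_params = d_model * vocab_size                # ~0.51B for the token embedding alone,
                                                   # counted again for the untied lm_head
                                                   # (tie_word_embeddings is false)

approx_totals = {"t5-3b": 2.8e9, "mt5-xl": 3.7e9}  # assumed rough parameter counts
for name, n in approx_totals.items():
    print(f"{name}: fp16 weights ~ {n * 2 / 2**30:.1f} GiB, fp32 weights ~ {n * 4 / 2**30:.1f} GiB")
```
So the two models look comparable on paper, even though mt5-xl is noticeably heavier, which is part of why I did not expect the OOM.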
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
1. Why can't mt5-xl train on 4*3090? Or what should I do?
2. Can mt5-large FP16 (mainly DeepSpeed) be used? If not, is there any plan to fix it?
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9865/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9864/comments | https://api.github.com/repos/huggingface/transformers/issues/9864/events | https://github.com/huggingface/transformers/issues/9864 | 795,652,112 | MDU6SXNzdWU3OTU2NTIxMTI= | 9,864 | Longformer: raise TypeError("pred must not be a Python bool", pred) | {
"login": "xuxingya",
"id": 13343428,
"node_id": "MDQ6VXNlcjEzMzQzNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13343428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuxingya",
"html_url": "https://github.com/xuxingya",
"followers_url": "https://api.github.com/users/xuxingya/followers",
"following_url": "https://api.github.com/users/xuxingya/following{/other_user}",
"gists_url": "https://api.github.com/users/xuxingya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuxingya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuxingya/subscriptions",
"organizations_url": "https://api.github.com/users/xuxingya/orgs",
"repos_url": "https://api.github.com/users/xuxingya/repos",
"events_url": "https://api.github.com/users/xuxingya/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuxingya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @jplu do you maybe have a good idea here?",
"Hey @xuxingya !! Thanks a lot for reporting the issue! Indeed Longformer has a bug in the `_pad_to_window_size` method. We will work on fixing this ASAP.\r\n\r\nEven though there is indeed a bug, your piece of code is wrong and should be:\r\n```python\r\nfrom transformers.models.longformer.modeling_tf_longformer import TFLongformerMainLayer\r\nfrom tensorflow.keras.layers import Input, Dense\r\nfrom tensorflow.keras.models import Model\r\nfrom transformers import LongformerConfig\r\nimport tensorflow as tf\r\nimport numpy as np\r\n\r\ntf.random.set_seed(200)\r\n\r\n\r\nclass CustomLongFormer(tf.keras.layers.Layer):\r\n def __init__(self, name='longformer', **kwargs):\r\n super().__init__(name=name, **kwargs)\r\n config = LongformerConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)\r\n self.longformer = TFLongformerMainLayer(config)\r\n\r\n def call(self, inputs):\r\n x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]\r\n return x\r\n\r\n\r\nlongformer = CustomLongFormer()\r\ninputs = Input(shape=(None, None), dtype='float32', name=\"inputs_embeds\")\r\noutput = longformer(inputs)\r\noutput = Dense(9, activation='softmax')(output)\r\nmodel = Model(inputs, output)\r\nmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy')\r\n\r\nx = np.array([np.random.uniform(0,1, (3, 768))] * 100)\r\ny = np.array([[1]*3] * 100)\r\nmodel.fit(x=x, y=y, epochs=10, batch_size=4, validation_split=0.1)\r\n```",
"Fixed in #9942 "
] | 1,611 | 1,612 | 1,612 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Ubuntu 18.04
- Python version: 3.7.6
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.3.1, 2.3.2 (with or without GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrick-s-h-lewis
Models:
- longformer @patrickvonplaten
## Information
Errors occur when I use the TFLongformerMainLayer as a layer of my model. I will give a simple example below to reproduce this bug.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
The error occurs in 'transformers/models/longformer/modeling_tf_longformer.py:1799 _pad_to_window_size *
inputs_embeds = tf.cond(padding_len > 0, pad_embeddings, lambda: inputs_embeds)'
It looks like `padding_len > 0` is a Python bool, which causes this error.
According to the [official guide of tf.cond](https://www.tensorflow.org/api_docs/python/tf/cond) example: `result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))`, I think this is because both 'padding_len' and '0' are not tensors, so `padding_len > 0` just returns a Python bool.
```
TypeError: in user code:
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
test.py:44 call *
x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1680 call *
(
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/transformers/models/longformer/modeling_tf_longformer.py:1799 _pad_to_window_size *
inputs_embeds = tf.cond(padding_len > 0, pad_embeddings, lambda: inputs_embeds)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1396 cond_for_tf_v2
return cond(pred, true_fn=true_fn, false_fn=false_fn, strict=True, name=name)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py:507 new_func
return func(*args, **kwargs)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/control_flow_ops.py:1180 cond
return cond_v2.cond_v2(pred, true_fn, false_fn, name)
/home/xingya/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/ops/cond_v2.py:62 cond_v2
raise TypeError("pred must not be a Python bool", pred)
TypeError: ('pred must not be a Python bool', True)
```
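For what it's worth, here is a minimal sketch (my own illustration, assuming `padding_len` is a plain Python int at this point, and not necessarily how the library ends up fixing it) of building the predicate as a tensor so that `tf.cond` accepts it:
```python
import tensorflow as tf

padding_len = 2  # assumed: a plain Python int, as computed in _pad_to_window_size

# `padding_len > 0` evaluates to a Python bool, which tf.cond rejects here (see the traceback above);
# tf.greater returns a boolean tf.Tensor, which tf.cond accepts.
pred = tf.greater(padding_len, 0)
result = tf.cond(pred,
                 lambda: tf.constant("pad"),      # stand-in for pad_embeddings
                 lambda: tf.constant("no pad"))   # stand-in for returning inputs_embeds unchanged
print(result)
```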
Here is a snippet to reproduce this bug:
```
from transformers.models.longformer.modeling_tf_longformer import TFLongformerMainLayer
from tensorflow.keras.layers import Input, Embedding, Dense
from tensorflow.keras.models import Model
from transformers import LongformerConfig
import tensorflow as tf
import numpy as np
tf.random.set_seed(200)
class LongFormerMain(tf.keras.layers.Layer):
def __init__(self, name='longformer', **kwargs):
super(LongFormerMain, self).__init__(name=name, **kwargs)
config = LongformerConfig(attention_window=4, num_hidden_layers=1, vocab_size=10)
self.longformer = TFLongformerMainLayer(config)
def call(self, inputs):
x = self.longformer(input_ids=None, inputs_embeds=inputs)[0]
return x
inputs = Input(shape=(None,), dtype='int32')
output = Embedding(100, 768)(inputs)
longformer = LongFormerMain()
output = longformer(output)
output = Dense(9, activation='softmax')(output)
model = Model(inputs, output)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x = np.array([[5, 2, 3] * 3] * 100)
y = np.array([[1, 2, 3] * 3] * 100)
model.fit(x=x, y=y, epochs=10, batch_size=4, validation_split=0.1)
print(model.predict([[5, 2, 3] * 3]))
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9864/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9863/comments | https://api.github.com/repos/huggingface/transformers/issues/9863/events | https://github.com/huggingface/transformers/issues/9863 | 795,532,336 | MDU6SXNzdWU3OTU1MzIzMzY= | 9,863 | Add support for tf2 encoder_decoder | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Shall I start working on it if no one else is doing it?",
"Feel free to give it a try and tag me if you encounter any issues along the way! Just to set expectations, such a PR will be a longer project (~1 month) and is a relatively low priority for the library at the moment, so I might not be able to reply daily. \r\n\r\nBut nevertheless, I'm more than happy to guide you through a PR :-) ",
"Thanks! I will start the PR soon :)"
] | 1,611 | 1,634 | 1,634 | CONTRIBUTOR | null | # 🌟 New model addition
I would like to add `TensorFlow-2` support for `encoder_decoder` model. I will soon create a PR, if this is approved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9863/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9862/comments | https://api.github.com/repos/huggingface/transformers/issues/9862/events | https://github.com/huggingface/transformers/issues/9862 | 795,486,856 | MDU6SXNzdWU3OTU0ODY4NTY= | 9,862 | AttributeError with T5Tokenizer | {
"login": "snat1505027",
"id": 18405970,
"node_id": "MDQ6VXNlcjE4NDA1OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/18405970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snat1505027",
"html_url": "https://github.com/snat1505027",
"followers_url": "https://api.github.com/users/snat1505027/followers",
"following_url": "https://api.github.com/users/snat1505027/following{/other_user}",
"gists_url": "https://api.github.com/users/snat1505027/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snat1505027/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snat1505027/subscriptions",
"organizations_url": "https://api.github.com/users/snat1505027/orgs",
"repos_url": "https://api.github.com/users/snat1505027/repos",
"events_url": "https://api.github.com/users/snat1505027/events{/privacy}",
"received_events_url": "https://api.github.com/users/snat1505027/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think the errors could be more explicit here, here I think it comes from the fact that you don't have SentencePiece installed. Can you try to install it and let me know if it fixes your issue?",
"Hi @LysandreJik. I had the same issue, with sentencepiece installed. I also notice that my previous notebooks with T5Tokenizer and t5-base don't also work as well.\r\n\r\nHere's my error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-42-63aa129d4b0c> in <module>()\r\n----> 1 dataset = ParaphraseDataset(tokenizer, 'data', 'dev', 256)\r\n 2 print(\"Val dataset: \",len(dataset))\r\n 3 \r\n 4 data = dataset[61]\r\n 5 print(tokenizer.decode(data['source_ids']))\r\n\r\n1 frames\r\n<ipython-input-39-2fa4af2cad5a> in __init__(self, tokenizer, data_dir, type_path, max_len)\r\n 12 self.targets = []\r\n 13 \r\n---> 14 self._build()\r\n 15 \r\n 16 def __len__(self):\r\n\r\n<ipython-input-39-2fa4af2cad5a> in _build(self)\r\n 34 \r\n 35 # tokenize inputs\r\n---> 36 tokenized_inputs = self.tokenizer.batch_encode_plus(\r\n 37 [input_], max_length=self.max_len, pad_to_max_length=True, return_tensors=\"pt\", truncation='longest_first'\r\n 38 )\r\n\r\nAttributeError: 'NoneType' object has no attribute 'batch_encode_plus'\r\n```\r\n\r\nIt seems to me that T5Tokenizer isn't loading the T5-base tokenizer properly",
"Hmmm I'm pretty sure this only happens when you don't have sentencepiece installed. Do you mind pasting your environment info as well as `pip list`? If you're running on colab, can you make sure you restart the runtime before trying again? Thank you for your patience.",
"Here's mine (sorry for the long scroll!) (PS: I just found that downgrading the version of the transformers package solves the issue):\r\n\r\n```\r\nPackage Version \r\n----------------------------- ---------------\r\nabsl-py 0.10.0 \r\naiohttp 3.7.3 \r\nalabaster 0.7.12 \r\nalbumentations 0.1.12 \r\naltair 4.1.0 \r\nappdirs 1.4.4 \r\nargon2-cffi 20.1.0 \r\nasgiref 3.3.1 \r\nastor 0.8.1 \r\nastropy 4.1 \r\nastunparse 1.6.3 \r\nasync-generator 1.10 \r\nasync-timeout 3.0.1 \r\natari-py 0.2.6 \r\natomicwrites 1.4.0 \r\nattrs 20.3.0 \r\naudioread 2.1.9 \r\nautograd 1.3 \r\nBabel 2.9.0 \r\nbackcall 0.2.0 \r\nbeautifulsoup4 4.6.3 \r\nbleach 3.2.2 \r\nblis 0.4.1 \r\nbokeh 2.1.1 \r\nBottleneck 1.3.2 \r\nbranca 0.4.2 \r\nbs4 0.0.1 \r\nCacheControl 0.12.6 \r\ncachetools 4.2.1 \r\ncatalogue 1.0.0 \r\ncertifi 2020.12.5 \r\ncffi 1.14.4 \r\nchainer 7.4.0 \r\nchardet 3.0.4 \r\nclick 7.1.2 \r\ncloudpickle 1.3.0 \r\ncmake 3.12.0 \r\ncmdstanpy 0.9.5 \r\ncolorlover 0.3.0 \r\ncommunity 1.0.0b1 \r\ncontextlib2 0.5.5 \r\nconvertdate 2.2.0 \r\ncoverage 3.7.1 \r\ncoveralls 0.5 \r\ncrcmod 1.7 \r\ncufflinks 0.17.3 \r\ncupy-cuda101 7.4.0 \r\ncvxopt 1.2.5 \r\ncvxpy 1.0.31 \r\ncycler 0.10.0 \r\ncymem 2.0.5 \r\nCython 0.29.21 \r\ndaft 0.0.4 \r\ndask 2.12.0 \r\ndataclasses 0.8 \r\ndatascience 0.10.6 \r\ndebugpy 1.0.0 \r\ndecorator 4.4.2 \r\ndefusedxml 0.6.0 \r\ndescartes 1.1.0 \r\ndill 0.3.3 \r\ndistributed 1.25.3 \r\nDjango 3.1.5 \r\ndlib 19.18.0 \r\ndm-tree 0.1.5 \r\ndocopt 0.6.2 \r\ndocutils 0.16 \r\ndopamine-rl 1.0.5 \r\nearthengine-api 0.1.238 \r\neasydict 1.9 \r\necos 2.0.7.post1 \r\neditdistance 0.5.3 \r\nen-core-web-sm 2.2.5 \r\nentrypoints 0.3 \r\nephem 3.7.7.1 \r\net-xmlfile 1.0.1 \r\nfa2 0.3.5 \r\nfancyimpute 0.4.3 \r\nfastai 1.0.61 \r\nfastdtw 0.3.4 \r\nfastprogress 1.0.0 \r\nfastrlock 0.5 \r\nfbprophet 0.7.1 \r\nfeather-format 0.4.1 \r\nfilelock 3.0.12 \r\nfirebase-admin 4.4.0 \r\nfix-yahoo-finance 0.0.22 \r\nFlask 1.1.2 \r\nflatbuffers 1.12 \r\nfolium 0.8.3 \r\nfsspec 0.8.5 \r\nfuture 0.18.2 \r\ngast 0.3.3 \r\nGDAL 2.2.2 \r\ngdown 3.6.4 \r\ngensim 3.6.0 \r\ngeographiclib 1.50 \r\ngeopy 1.17.0 \r\ngin-config 0.4.0 \r\nglob2 0.7 \r\ngoogle 2.0.3 \r\ngoogle-api-core 1.16.0 \r\ngoogle-api-python-client 1.7.12 \r\ngoogle-auth 1.17.2 \r\ngoogle-auth-httplib2 0.0.4 \r\ngoogle-auth-oauthlib 0.4.2 \r\ngoogle-cloud-bigquery 1.21.0 \r\ngoogle-cloud-bigquery-storage 1.1.0 \r\ngoogle-cloud-core 1.0.3 \r\ngoogle-cloud-datastore 1.8.0 \r\ngoogle-cloud-firestore 1.7.0 \r\ngoogle-cloud-language 1.2.0 \r\ngoogle-cloud-storage 1.18.1 \r\ngoogle-cloud-translate 1.5.0 \r\ngoogle-colab 1.0.0 \r\ngoogle-pasta 0.2.0 \r\ngoogle-resumable-media 0.4.1 \r\ngoogleapis-common-protos 1.52.0 \r\ngoogledrivedownloader 0.4 \r\ngraphviz 0.10.1 \r\ngrpcio 1.32.0 \r\ngspread 3.0.1 \r\ngspread-dataframe 3.0.8 \r\ngym 0.17.3 \r\nh5py 2.10.0 \r\nHeapDict 1.0.1 \r\nholidays 0.10.4 \r\nholoviews 1.13.5 \r\nhtml5lib 1.0.1 \r\nhttpimport 0.5.18 \r\nhttplib2 0.17.4 \r\nhttplib2shim 0.0.3 \r\nhumanize 0.5.1 \r\nhyperopt 0.1.2 \r\nideep4py 2.0.0.post3 \r\nidna 2.10 \r\nidna-ssl 1.1.0 \r\nimage 1.5.33 \r\nimageio 2.4.1 \r\nimagesize 1.2.0 \r\nimbalanced-learn 0.4.3 \r\nimblearn 0.0 \r\nimgaug 0.2.9 \r\nimportlib-metadata 3.4.0 \r\nimportlib-resources 5.1.0 \r\nimutils 0.5.4 \r\ninflect 2.1.0 \r\niniconfig 1.1.1 \r\nintel-openmp 2021.1.2 \r\nintervaltree 2.1.0 \r\nipykernel 4.10.1 \r\nipython 5.5.0 \r\nipython-genutils 0.2.0 \r\nipython-sql 0.3.9 \r\nipywidgets 7.6.3 \r\nitsdangerous 1.1.0 \r\njax 0.2.7 \r\njaxlib 0.1.57+cuda101 \r\njdcal 1.4.1 \r\njedi 
0.18.0 \r\njieba 0.42.1 \r\nJinja2 2.11.2 \r\njoblib 1.0.0 \r\njpeg4py 0.1.4 \r\njsonschema 2.6.0 \r\njupyter 1.0.0 \r\njupyter-client 5.3.5 \r\njupyter-console 5.2.0 \r\njupyter-core 4.7.0 \r\njupyterlab-pygments 0.1.2 \r\njupyterlab-widgets 1.0.0 \r\nkaggle 1.5.10 \r\nkapre 0.1.3.1 \r\nKeras 2.4.3 \r\nKeras-Preprocessing 1.1.2 \r\nkeras-vis 0.4.1 \r\nkiwisolver 1.3.1 \r\nknnimpute 0.1.0 \r\nkorean-lunar-calendar 0.2.1 \r\nlibrosa 0.8.0 \r\nlightgbm 2.2.3 \r\nllvmlite 0.34.0 \r\nlmdb 0.99 \r\nlucid 0.3.8 \r\nLunarCalendar 0.0.9 \r\nlxml 4.2.6 \r\nMarkdown 3.3.3 \r\nMarkupSafe 1.1.1 \r\nmatplotlib 3.2.2 \r\nmatplotlib-venn 0.11.6 \r\nmissingno 0.4.2 \r\nmistune 0.8.4 \r\nmizani 0.6.0 \r\nmkl 2019.0 \r\nmlxtend 0.14.0 \r\nmore-itertools 8.6.0 \r\nmoviepy 0.2.3.5 \r\nmpmath 1.1.0 \r\nmsgpack 1.0.2 \r\nmultidict 5.1.0 \r\nmultiprocess 0.70.11.1 \r\nmultitasking 0.0.9 \r\nmurmurhash 1.0.5 \r\nmusic21 5.5.0 \r\nnatsort 5.5.0 \r\nnbclient 0.5.1 \r\nnbconvert 5.6.1 \r\nnbformat 5.1.2 \r\nnest-asyncio 1.4.3 \r\nnetworkx 2.5 \r\nnibabel 3.0.2 \r\nnltk 3.2.5 \r\nnotebook 5.3.1 \r\nnp-utils 0.5.12.1 \r\nnumba 0.51.2 \r\nnumexpr 2.7.2 \r\nnumpy 1.19.5 \r\nnvidia-ml-py3 7.352.0 \r\noauth2client 4.1.3 \r\noauthlib 3.1.0 \r\nokgrade 0.4.3 \r\nopencv-contrib-python 4.1.2.30 \r\nopencv-python 4.1.2.30 \r\nopenpyxl 2.5.9 \r\nopt-einsum 3.3.0 \r\nosqp 0.6.2.post0 \r\npackaging 20.8 \r\npalettable 3.3.0 \r\npandas 1.1.5 \r\npandas-datareader 0.9.0 \r\npandas-gbq 0.13.3 \r\npandas-profiling 1.4.1 \r\npandocfilters 1.4.3 \r\npanel 0.9.7 \r\nparam 1.10.1 \r\nparso 0.8.1 \r\npathlib 1.0.1 \r\npatsy 0.5.1 \r\npexpect 4.8.0 \r\npickleshare 0.7.5 \r\nPillow 7.0.0 \r\npip 19.3.1 \r\npip-tools 4.5.1 \r\nplac 1.1.3 \r\nplotly 4.4.1 \r\nplotnine 0.6.0 \r\npluggy 0.7.1 \r\npooch 1.3.0 \r\nportpicker 1.3.1 \r\nprefetch-generator 1.0.1 \r\npreshed 3.0.5 \r\nprettytable 2.0.0 \r\nprogressbar2 3.38.0 \r\nprometheus-client 0.9.0 \r\npromise 2.3 \r\nprompt-toolkit 1.0.18 \r\nprotobuf 3.12.4 \r\npsutil 5.4.8 \r\npsycopg2 2.7.6.1 \r\nptyprocess 0.7.0 \r\npy 1.10.0 \r\npyarrow 0.14.1 \r\npyasn1 0.4.8 \r\npyasn1-modules 0.2.8 \r\npycocotools 2.0.2 \r\npycparser 2.20 \r\npyct 0.4.8 \r\npydata-google-auth 1.1.0 \r\npydot 1.3.0 \r\npydot-ng 2.0.0 \r\npydotplus 2.0.2 \r\nPyDrive 1.3.1 \r\npyemd 0.5.1 \r\npyglet 1.5.0 \r\nPygments 2.6.1 \r\npygobject 3.26.1 \r\npymc3 3.7 \r\nPyMeeus 0.3.7 \r\npymongo 3.11.2 \r\npymystem3 0.2.0 \r\npynndescent 0.5.1 \r\nPyOpenGL 3.1.5 \r\npyparsing 2.4.7 \r\npyrsistent 0.17.3 \r\npysndfile 1.3.8 \r\nPySocks 1.7.1 \r\npystan 2.19.1.1 \r\npytest 3.6.4 \r\npython-apt 1.6.5+ubuntu0.5\r\npython-chess 0.23.11 \r\npython-dateutil 2.8.1 \r\npython-louvain 0.15 \r\npython-slugify 4.0.1 \r\npython-utils 2.5.3 \r\npytorch-lightning 1.1.6 \r\npytz 2018.9 \r\npyviz-comms 2.0.1 \r\nPyWavelets 1.1.1 \r\nPyYAML 5.3.1 \r\npyzmq 21.0.1 \r\nqdldl 0.1.5.post0 \r\nqtconsole 5.0.2 \r\nQtPy 1.9.0 \r\nregex 2019.12.20 \r\nrequests 2.23.0 \r\nrequests-oauthlib 1.3.0 \r\nresampy 0.2.2 \r\nretrying 1.3.3 \r\nrpy2 3.2.7 \r\nrsa 4.7 \r\nsacremoses 0.0.43 \r\nscikit-image 0.16.2 \r\nscikit-learn 0.22.2.post1 \r\nscipy 1.4.1 \r\nscreen-resolution-extra 0.0.0 \r\nscs 2.1.2 \r\nseaborn 0.11.1 \r\nSend2Trash 1.5.0 \r\nsentencepiece 0.1.95 \r\nsetuptools 51.3.3 \r\nsetuptools-git 1.2 \r\nShapely 1.7.1 \r\nsimplegeneric 0.8.1 \r\nsix 1.15.0 \r\nsklearn 0.0 \r\nsklearn-pandas 1.8.0 \r\nsmart-open 4.1.2 \r\nsnowballstemmer 2.1.0 \r\nsortedcontainers 2.3.0 \r\nSoundFile 0.10.3.post1 \r\nspacy 2.2.4 \r\nSphinx 1.8.5 
\r\nsphinxcontrib-serializinghtml 1.1.4 \r\nsphinxcontrib-websupport 1.2.4 \r\nSQLAlchemy 1.3.22 \r\nsqlparse 0.4.1 \r\nsrsly 1.0.5 \r\nstatsmodels 0.10.2 \r\nsympy 1.1.1 \r\ntables 3.4.4 \r\ntabulate 0.8.7 \r\ntblib 1.7.0 \r\ntensorboard 2.4.1 \r\ntensorboard-plugin-wit 1.8.0 \r\ntensorboardcolab 0.0.22 \r\ntensorflow 2.4.1 \r\ntensorflow-addons 0.8.3 \r\ntensorflow-datasets 4.0.1 \r\ntensorflow-estimator 2.4.0 \r\ntensorflow-gcs-config 2.4.0 \r\ntensorflow-hub 0.11.0 \r\ntensorflow-metadata 0.27.0 \r\ntensorflow-privacy 0.2.2 \r\ntensorflow-probability 0.12.1 \r\ntermcolor 1.1.0 \r\nterminado 0.9.2 \r\ntestpath 0.4.4 \r\ntext-unidecode 1.3 \r\ntextblob 0.15.3 \r\ntextgenrnn 1.4.1 \r\nTheano 1.0.5 \r\nthinc 7.4.0 \r\ntifffile 2020.9.3 \r\ntokenizers 0.8.1rc2 \r\ntoml 0.10.2 \r\ntoolz 0.11.1 \r\ntorch 1.7.0+cu101 \r\ntorchsummary 1.5.1 \r\ntorchtext 0.3.1 \r\ntorchvision 0.8.1+cu101 \r\ntornado 5.1.1 \r\ntqdm 4.41.1 \r\ntraitlets 4.3.3 \r\ntransformers 3.3.0 \r\ntweepy 3.6.0 \r\ntypeguard 2.7.1 \r\ntyping-extensions 3.7.4.3 \r\ntzlocal 1.5.1 \r\numap-learn 0.5.0 \r\nuritemplate 3.0.1 \r\nurllib3 1.24.3 \r\nvega-datasets 0.9.0 \r\nwasabi 0.8.1 \r\nwcwidth 0.2.5 \r\nwebencodings 0.5.1 \r\nWerkzeug 1.0.1 \r\nwheel 0.36.2 \r\nwidgetsnbextension 3.5.1 \r\nwordcloud 1.5.0 \r\nwrapt 1.12.1 \r\nxarray 0.15.1 \r\nxgboost 0.90 \r\nxkit 0.0.0 \r\nxlrd 1.1.0 \r\nxlwt 1.3.0 \r\nyarl 1.6.3 \r\nyellowbrick 0.9.1 \r\nzict 2.0.0 \r\nzipp 3.4.0\r\n```",
"Thank you for sharing! In previous packages, we needed `sentencepiece` so it was installed automatically. It's not anymore, with I think was the issue here. Will look into it further.",
"> I think the errors could be more explicit here, here I think it comes from the fact that you don't have SentencePiece installed. Can you try to install it and let me know if it fixes your issue?\r\n\r\nYes, I have already installed **SentencePiece**.",
"It eventually worked for me with the following re-installation:\r\n\r\n```\r\n!pip install transformers==2.9.0 \r\n!pip install pytorch_lightning==0.7.5\r\n```\r\n\r\nMaybe the error was due to the specific version.",
"You need sentencepiece: **_!pip install sentencepiece_**\r\n\r\nHowever, if you are using colab notebook you have to **_restart the runtime for it to work._** after installing sentencepiece_",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | I am trying to use **T5Tokenizer** and **t5-base** model to fine-tune on **SQuAD** dataset. But each time, when I run the tokenizer code I get errors (e.g, `'NoneType' object has no attribute 'encode'/'batch_encode_plus'/'encode_plus'`).
Example code
```
tokenizer = T5Tokenizer.from_pretrained('t5-base')
ids_neg = tokenizer.encode('negative </s>')
ids_pos = tokenizer.encode('positive </s>')
```
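As a quick sanity check (my own guess at the failure mode: the tokenizer object itself may be coming back as `None`, for example if an optional dependency such as sentencepiece is missing), something like this makes the error easier to interpret:
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('t5-base')
# If this prints None, any later call such as tokenizer.encode(...) will fail with
# exactly the "'NoneType' object has no attribute ..." error shown below.
print(type(tokenizer), tokenizer)
```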
I get the following error:
> AttributeError Traceback (most recent call last)
> <ipython-input-19-f34cd55ac673> in <module>()
> ----> 1 ids_neg = tokenizer.encode('negative </s>')
> 2 ids_pos = tokenizer.encode('positive </s>')
> 3 len(ids_neg), len(ids_pos)
>
> AttributeError: 'NoneType' object has no attribute 'encode' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9862/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9861/comments | https://api.github.com/repos/huggingface/transformers/issues/9861/events | https://github.com/huggingface/transformers/pull/9861 | 795,457,922 | MDExOlB1bGxSZXF1ZXN0NTYyODAyMjQ0 | 9,861 | Rag modification | {
"login": "LouisCastricato",
"id": 5066878,
"node_id": "MDQ6VXNlcjUwNjY4Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/5066878?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LouisCastricato",
"html_url": "https://github.com/LouisCastricato",
"followers_url": "https://api.github.com/users/LouisCastricato/followers",
"following_url": "https://api.github.com/users/LouisCastricato/following{/other_user}",
"gists_url": "https://api.github.com/users/LouisCastricato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LouisCastricato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LouisCastricato/subscriptions",
"organizations_url": "https://api.github.com/users/LouisCastricato/orgs",
"repos_url": "https://api.github.com/users/LouisCastricato/repos",
"events_url": "https://api.github.com/users/LouisCastricato/events{/privacy}",
"received_events_url": "https://api.github.com/users/LouisCastricato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Erm.... misclicked"
] | 1,611 | 1,611 | 1,611 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9861/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9861",
"html_url": "https://github.com/huggingface/transformers/pull/9861",
"diff_url": "https://github.com/huggingface/transformers/pull/9861.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9861.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9860/comments | https://api.github.com/repos/huggingface/transformers/issues/9860/events | https://github.com/huggingface/transformers/issues/9860 | 795,456,437 | MDU6SXNzdWU3OTU0NTY0Mzc= | 9,860 | Padding tokens affect MobileBert output | {
"login": "johnmccain",
"id": 17013636,
"node_id": "MDQ6VXNlcjE3MDEzNjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/17013636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnmccain",
"html_url": "https://github.com/johnmccain",
"followers_url": "https://api.github.com/users/johnmccain/followers",
"following_url": "https://api.github.com/users/johnmccain/following{/other_user}",
"gists_url": "https://api.github.com/users/johnmccain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnmccain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnmccain/subscriptions",
"organizations_url": "https://api.github.com/users/johnmccain/orgs",
"repos_url": "https://api.github.com/users/johnmccain/repos",
"events_url": "https://api.github.com/users/johnmccain/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnmccain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Following up on this, is there anyone in particular that I should tag to take a look at this issue?",
"Hi @johnmccain, I'll take a look in the coming days.",
"Hi! I ran your example and added an additional relative difference computation: \r\n```py\r\nr_tol = torch.max(torch.abs(out_with_pad - out_without_pad) / torch.abs(out_without_pad))\r\n```\r\n\r\nWhat I gather from this is that the difference between the two outputs varies from 0.3% to 0.4%.\r\n\r\nSome things to note:\r\n\r\n- The attention mask is useful to hide tokens, but isn't perfect: the attention mask essentially adds a very large negative value to the attentions of the tokens we don't want to attend to (-10000), but that is not (-inf) either, so it doesn't erase them from existence. Even if padding is correctly done with an attention mask, some differences of ~1e-4 or ~1e-5 can still happen.\r\n- Unfortunately, there’s not much we can do, given that this is the way the original model was trained, using an adder (-10000). We have to keep as close as possible to the original implementation.\r\n- While keeping in mind that these differences are usually very very small, and shouldn’t have an impact on your model, the way to get closer to the expected behavior if to have as few padding tokens as possible.\r\n\r\nNow, MobileBERT is peculiar in that it has extremely high outputs compared to other models, but from what I'm seeing it's still within the 0.3%-0.4%. It is slightly higher than for other models, such as BERT which are in the ~0.0001% range. I didn't dive in enough to see exactly why this is so, but my guess is that all the tweaks to make it smaller (bottlenecks) might be responsible, as well as the very high outputs.\r\n\r\nIf you randomly initialize a MobileBERT and run through the same tests:\r\n\r\n```py\r\nconfig = AutoConfig.from_pretrained(model_string)\r\nmodel = AutoModelForSequenceClassification.from_config(config)\r\n```\r\n\r\nYou'll get results that are comparable to BERT:\r\n\r\n```py\r\n print(out_with_pad_logits, out_without_pad_logits)\r\n print(out_without_pad_logits - out_with_pad_logits)\r\n print(r_tol)\r\n```\r\n\r\nyields \r\n\r\n```\r\ntensor([[ 0.0255, -0.0048]]) tensor([[ 0.0255, -0.0048]])\r\ntensor([[-2.9769e-05, 8.6003e-06]])\r\ntensor(0.0018)\r\n```\r\n\r\non my side (random each run as randomly initialized weights)\r\n\r\nInvestigated with @jplu ",
"Thanks for looking into this!\r\nThat makes sense that this phenomenon would only be immediately visible with MobileBert with its extremely large logits. I will say that this can affect downstream tasks in my experience--for a MobileBert model finetuned on a binary classification task, switching from `padding='max_length'` to `padding='longest'` changed a handful of logits on my test set enough to affect the predicted class. (~1 in 500-1000 examples were altered enough to flip from 0 to 1 or vice versa). I haven't experienced that same sort of impact when using other Huggingface models like RoBERTa or Bert.\r\n\r\nI wonder if the effect of padding tokens is diminished when using an activation in the classifier head to avoid the extremely large logits as suggested in #8938. I will comment back with what I find.",
"Hey guys, I opened a similar issue a while ago https://github.com/huggingface/transformers/issues/7070. It was automatically closed due to inactivity, but we are still struggling with the issue every day and don't use batching when predicting.\r\n\r\nWhen computing the relative difference, for the example input shown in the issue mentioned above (I compared `emb2` and `emb4` from my issue), I got a quite disturbing result:\r\n- mean error: 4% \r\n- median error: 0.9%\r\n- max error: 200% (5033.1235 VS 1674.1803)\r\n\r\nI might have done some mistake, but I just used the formula written above:\r\n\r\n> ```python\r\n> r_tol = torch.max(torch.abs(out_with_pad - out_without_pad) / torch.abs(out_without_pad))\r\n> ```",
"Hi @swecooo! Thanks for letting us know. There might be a deeper issue than what I've seen then, I'll take a deeper look as soon as I have time.\r\n\r\nCould you specify how you computed these, for example with a code snippet so that I can investigate? Thanks!",
"Hey @LysandreJik, thank you very much for looking into this issue of ours. :slightly_smiling_face: This is the snippet that I used for computing the mean, median and max errors. I believe it should be identical to your formula mentioned above.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModel, AutoTokenizer\r\n\r\nmodel = 'google/mobilebert-uncased'\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\nmodel = AutoModel.from_pretrained(model)\r\ntext = 'Hey, how are you?'\r\n\r\ni1 = tokenizer.batch_encode_plus([text], padding=True, return_tensors='pt')\r\nemb1 = model(**i1).pooler_output[0] # Only one in batch (not padded): [-2.5088e+07, 7.3279e+04, 1.7544e+05, ...]\r\n\r\ni2 = tokenizer.batch_encode_plus([text, text + ' hey'], padding=True, return_tensors='pt')\r\nemb2 = model(**i2).pooler_output[0] # Not longest (padded): [-2.4871e+07, 8.1873e+04, 1.6693e+05, ...]\r\n\r\ndiff = torch.abs(emb2 - emb1) / torch.abs(emb1)\r\nprint(\"Mean\", torch.mean(diff)) # 0.0432\r\nprint(\"Median\", torch.median(diff)) # 0.0090\r\nprint(\"Max\", torch.max(diff)) # 2.0063\r\ntop_10 = torch.argsort(diff, descending=True)[:10]\r\nprint(diff[top_10], emb1[top_10], emb2[top_10], sep=\"\\n\") # [2.0063, 1.5129, 1.1135, 0.6702, ...]\r\n # [1674.1803, 7940.5342, 2012.5089, -13467.0508, ...]\r\n # [5033.1235, 19954.0098, 4253.4575, -4441.5933 ...]\r\n```\r\n\r\nFrom the outputs, it seems that, naturally, smaller output values have a larger error. Please do let me know if you can (or cannot) reproduce the issue in the same magnitude as it happens for me, or if I can provide any more details.\r\n\r\nI used `torch==1.7.1` and `transformers==4.3.2` on Python 3.7.",
"Cool, thanks for providing this snippet! I'll need to take a few hours to deep dive into it and see what's happening, so you expect an answer by the end of the next week if that's alright.",
"Will also look into your previous issue https://github.com/huggingface/transformers/issues/7070 (Sorry that it felt through the cracks!)",
"> you expect an answer by the end of the next week if that's alright\r\n\r\nSure, thanks a lot for looking into this. About #7070, I believe it's basically the same issue as here. :slightly_smiling_face:",
"Hello! I've taken a look, and you are both right: padding tokens affect MobileBERT's output values. One thing that MobileBERT does differently to other models, is that it uses an embedding size of `128` which is different to the `hidden_size`. \r\n\r\nBefore adding the word embeddings to the position embeddings and token type embeddings, these word embeddings are first passed through a 1D convolution with kernel size 3, effectively casting a tensor of size `(batch_size, sequence_length, 128)` to a tensor of size `(batch_size, sequence_length, 384)`. \r\n\r\nThis happens here: https://github.com/huggingface/transformers/blob/a85eb616f73c3e7eedb22146972ea41921164671/src/transformers/models/mobilebert/modeling_mobilebert.py#L199-L214\r\n\r\nThen, this value is passed through a linear layer of output size `512`, resulting in a final value of size `(batch_size, sequence_length, 512)`. \r\n\r\nThis happens here: https://github.com/huggingface/transformers/blob/a85eb616f73c3e7eedb22146972ea41921164671/src/transformers/models/mobilebert/modeling_mobilebert.py#L215-L216\r\n\r\nDue to these two transformations, if we have a single padding token, it now has an impact on the token that is right before it. One can easily test is with the following code:\r\n\r\n```py\r\nfrom transformers import MobileBertModel, MobileBertTokenizer\r\nimport torch\r\n\r\n# Instantiate model and tokenizer\r\nmodel = MobileBertModel.from_pretrained(\"google/mobilebert-uncased\")\r\ntokenizer = MobileBertTokenizer.from_pretrained(\"google/mobilebert-uncased\")\r\n\r\n# Create an array of just \"1\"\r\ninput_embeds = torch.ones([1, 10, 128])\r\n\r\n# Fill the last token's embeddings with a very high value\r\ninput_embeds[:, -1, :] = 100000\r\n\r\nresulting_embeddings = model.embeddings(inputs_embeds=input_embeds)\r\n# Resulting embeddings of shape [1, 10, 512]\r\n\r\nmaximum_values_per_token_embedding = resulting_embeddings.squeeze().max(dim=1).values.round().tolist()\r\n# [16.0, 17.0, 17.0, 17.0, 17.0, 17.0, 17.0, 17.0, 210926.0, 1566259.0]\r\n```\r\n\r\nAs we can see, the last two tokens are affected by the very high value of the last token. This is due to the 1D convolution. Unfortunately, the attention mask can't really do anything about that now, as it's only aware of the last value, and only ignoring that one.\r\n\r\n---\r\n\r\nSteps from here: I'm contacting the author to see if we have an error in our implementation w.r.t padding tokens. In the meantime I'll think about how we can handle it from there.\r\n\r\nThank you for opening this issue, this is quite an error in the expected behavior vs actual behavior!",
"Hi all! There seems to have been an error with the weights conversion, as this issue stems from the padding token (0) embeddings seem to have values, where they should not.\r\n\r\nCould you please confirm that adding the following line right after model instantiation solves your issues:\r\n\r\nIf the model is a `MobileBertModel` (for example with `AutoModel`)\r\n```py\r\nmodel.embeddings.word_embeddings.weight[0, :] = 0\r\n```\r\n@swecooo after adding the line mentioned above, running your code results in:\r\n```out\r\nMean tensor(8.5514e-06, grad_fn=<MeanBackward0>)\r\nMedian tensor(6.2345e-07, grad_fn=<MedianBackward0>)\r\nMax tensor(0.0003, grad_fn=<MaxBackward1>)\r\n```\r\n\r\nIf the model is a mobilebert with a head, for example sequence classification:\r\n```py\r\nmodel.mobilebert.embeddings.word_embeddings.weight[0, :] = 0\r\n```\r\n\r\n@johnmccain after adding the line mentioned above, running your code results in:\r\n```out\r\ntensor([[-2765138.0000, 1868917.2500]])\r\ntensor([[-2765139.7500, 1868916.5000]])\r\n```\r\n\r\nIf you confirm this solves your issues, I will update the checkpoints on the hub.",
"Hey @LysandreJik, I can confirm that setting the pad token embeddings to zero solves the issue with my code. \r\nI went ahead and trained up the model on a classification task to check the real-world impact of zeroing the pad token embeddings, and I am no longer seeing discrepancies in classification output when using max_length vs longest padding 😃\r\n\r\nThank you!",
"This is great news!",
"Hi @LysandreJik, I can also confirm that zeroing the embedding solves the issue for me. Thanks a lot!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hey @LysandreJik, I just want to ping about the model checkpoint update, because it seems that the issue is still present in the model. I use the workaround for now, but if you found some time, it would be great to close this! :)",
"Thanks for the ping @sewco, I have just updated the weights. This can be closed now, feel free to reopen if you still feel something is missing."
] | 1,611 | 1,618 | 1,618 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Windows-10-10.0.17134-SP0
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...): MobileBert
Adding padding tokens to the end of a sequence affects MobileBert output even when masked. I've tried this on a few other models (`bert-base-uncased`, `roberta-base`, `xlm-roberta-base`) and was only able to replicate this with `google/mobilebert-uncased`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model_string = 'google/mobilebert-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_string)
model = AutoModelForSequenceClassification.from_pretrained(model_string)
example_text = 'Hello, world!'
input_with_pad = tokenizer.encode_plus(
example_text,
padding='max_length',
max_length=32,
return_tensors='pt'
)
print(input_with_pad)
# {'input_ids': tensor([[ 101, 7592, 1010, 2088, 999, 102, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
# 0, 0, 0, 0, 0, 0, 0, 0]])}
input_without_pad = tokenizer.encode_plus(
example_text,
padding='longest',
max_length=32,
return_tensors='pt'
)
print(input_without_pad)
# {'input_ids': tensor([[ 101, 7592, 1010, 2088, 999, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1]])}
with torch.no_grad():
model.eval()
out_with_pad = model(**input_with_pad)
print(out_with_pad.logits)
# tensor([[12693366., -5310913.]])
out_without_pad = model(**input_without_pad)
print(out_without_pad.logits)
# tensor([[12741167., -5327575.]])
```
## Expected behavior
Padding tokens should not affect the output of the model as long as they are masked. As far as I can tell, this only occurs with mobilebert. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9860/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9859/comments | https://api.github.com/repos/huggingface/transformers/issues/9859/events | https://github.com/huggingface/transformers/issues/9859 | 795,454,777 | MDU6SXNzdWU3OTU0NTQ3Nzc= | 9,859 | Head masking and test_head_masking not working properly for TFT5 models. | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's an assert we put in T5 because the head mask is not supported. Happy you take care of this!!!"
] | 1,611 | 1,613 | 1,613 | CONTRIBUTOR | null | When removing `test_head_masking` flags during #9858, I found out `test_headmasking` was actually never run for `TFT5Model` and it seems there must be a bug, please see below:
```
_______________________________________________________________________________________________________ TFT5ModelTest.test_headmasking _______________________________________________________________________________________________________
self = <tests.test_modeling_tf_t5.TFT5ModelTest testMethod=test_headmasking>
def test_headmasking(self):
if not self.test_head_masking:
return
random.Random().seed(42)
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
random.Random().seed()
inputs_dict["output_attentions"] = True
config.output_hidden_states = True
configs_no_init = _config_zero_init(config) # To be sure we have no Nan
for model_class in self.all_model_classes:
model = model_class(config=configs_no_init)
# Prepare head_mask
def prepare_layer_head_mask(i, attention_heads, num_hidden_layers):
if i == 0:
return tf.concat(
(tf.zeros(1, dtype=tf.float32), tf.ones(attention_heads - 1, dtype=tf.float32)), 0
)
elif i == num_hidden_layers - 1:
return tf.concat(
(tf.zeros(attention_heads - 1, dtype=tf.float32), tf.ones(1, dtype=tf.float32)), 0
)
else:
return tf.ones(attention_heads, dtype=tf.float32)
head_mask = tf.stack(
[
prepare_layer_head_mask(i, config.num_attention_heads, config.num_hidden_layers)
for i in range(config.num_hidden_layers)
],
0,
)
inputs = self._prepare_for_class(inputs_dict, model_class).copy()
inputs["head_mask"] = head_mask
if model.config.is_encoder_decoder:
signature = inspect.signature(model.call)
arg_names = [*signature.parameters.keys()]
if "decoder_head_mask" in arg_names: # necessary diferentiation because of T5 model
inputs["decoder_head_mask"] = head_mask
> outputs = model(**inputs, return_dict=True)
test_modeling_tf_common.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1012: in __call__
outputs = call_fn(inputs, *args, **kwargs)
../src/transformers/models/t5/modeling_tf_t5.py:1160: in call
inputs["encoder_outputs"] = self.encoder(
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:1012: in __call__
outputs = call_fn(inputs, *args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <transformers.models.t5.modeling_tf_t5.TFT5MainLayer object at 0x7f8c38206a30>
input_ids = <tf.Tensor: shape=(13, 7), dtype=int32, numpy=
array([[63, 79, 60, 1, 57, 50, 42],
[27, 6, 27, 88, 79, 14, 3... [95, 95, 79, 95, 63, 32, 24],
[ 8, 9, 14, 46, 91, 75, 56],
[26, 78, 52, 95, 45, 33, 78]], dtype=int32)>
attention_mask = None, encoder_hidden_states = None, encoder_attention_mask = None, inputs_embeds = None
head_mask = <tf.Tensor: shape=(5, 4), dtype=float32, numpy=
array([[0., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.],
[0., 0., 0., 1.]], dtype=float32)>, encoder_head_mask = None
past_key_values = None, use_cache = False, output_attentions = True, output_hidden_states = True, return_dict = True, training = False, kwargs = {}
inputs = {'attention_mask': <tf.Tensor: shape=(13, 7), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1.],
[1., 1..., 1.]], dtype=float32)>, 'encoder_attention_mask': None, 'encoder_head_mask': None, 'encoder_hidden_states': None, ...}
input_shape = [13, 7], batch_size = 13, seq_length = 7, mask_seq_length = 7
def call(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
inputs_embeds=None,
head_mask=None,
encoder_head_mask=None,
past_key_values=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False,
**kwargs,
) -> Tuple:
inputs = input_processing(
func=self.call,
config=self.config,
input_ids=input_ids,
attention_mask=attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
inputs_embeds=inputs_embeds,
head_mask=head_mask,
encoder_head_mask=encoder_head_mask,
past_key_values=past_key_values,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
training=training,
kwargs_call=kwargs,
)
if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(
f"You cannot specify both {err_msg_prefix}inputs and {err_msg_prefix}inputs_embeds at the same time"
)
elif inputs["input_ids"] is not None:
input_shape = shape_list(inputs["input_ids"])
inputs["input_ids"] = tf.reshape(inputs["input_ids"], (-1, input_shape[-1]))
elif inputs["inputs_embeds"] is not None:
input_shape = shape_list(inputs["inputs_embeds"])[:-1]
else:
err_msg_prefix = "decoder_" if self.is_decoder else ""
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
if inputs["inputs_embeds"] is None:
assert self.embed_tokens is not None, "You have to intialize the model with valid token embeddings"
inputs["inputs_embeds"] = self.embed_tokens(inputs["input_ids"])
batch_size, seq_length = input_shape
# required mask seq length can be calculated via length of past
mask_seq_length = (
shape_list(inputs["past_key_values"][0][0])[2] + seq_length
if inputs["past_key_values"] is not None
else seq_length
)
if inputs["attention_mask"] is None:
inputs["attention_mask"] = tf.fill((batch_size, mask_seq_length), 1)
if (
self.is_decoder
and inputs["encoder_attention_mask"] is None
and inputs["encoder_hidden_states"] is not None
):
encoder_seq_length = shape_list(inputs["encoder_hidden_states"])[1]
inputs["encoder_attention_mask"] = tf.fill((batch_size, encoder_seq_length), 1)
# initialize past_key_values with `None` if past does not exist
if inputs["past_key_values"] is None:
inputs["past_key_values"] = [None] * len(self.block)
# We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
# ourselves in which case we just need to make it broadcastable to all heads.
inputs["attention_mask"] = tf.cast(inputs["attention_mask"], dtype=tf.float32)
num_dims_attention_mask = len(shape_list(inputs["attention_mask"]))
if num_dims_attention_mask == 3:
extended_attention_mask = inputs["attention_mask"][:, None, :, :]
elif num_dims_attention_mask == 2:
# Provided a padding mask of dimensions [batch_size, mask_seq_length]
# - if the model is a decoder, apply a causal mask in addition to the padding mask
# - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length]
if self.is_decoder:
seq_ids = tf.range(mask_seq_length)
causal_mask = tf.less_equal(
tf.tile(seq_ids[None, None, :], (batch_size, mask_seq_length, 1)),
seq_ids[None, :, None],
)
causal_mask = tf.cast(causal_mask, dtype=tf.float32)
extended_attention_mask = causal_mask[:, None, :, :] * inputs["attention_mask"][:, None, None, :]
if inputs["past_key_values"][0] is not None:
extended_attention_mask = extended_attention_mask[:, :, -seq_length:, :]
else:
extended_attention_mask = inputs["attention_mask"][:, None, None, :]
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -1e9 for masked positions.
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
# T5 has a mask that can compare sequence ids, we can simulate this here with this transposition
# Cf. https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270
# extended_attention_mask = tf.math.equal(extended_attention_mask,
# tf.transpose(extended_attention_mask, perm=(-1, -2)))
extended_attention_mask = (1.0 - extended_attention_mask) * -1e9
if self.is_decoder and inputs["encoder_attention_mask"] is not None:
# If a 2D ou 3D attention mask is provided for the cross-attention
# we need to make broadcastable to [batch_size, num_heads, mask_seq_length, mask_seq_length]
# we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
inputs["encoder_attention_mask"] = tf.cast(inputs["encoder_attention_mask"], dtype=tf.float32)
num_dims_encoder_attention_mask = len(shape_list(inputs["encoder_attention_mask"]))
if num_dims_encoder_attention_mask == 3:
encoder_extended_attention_mask = inputs["encoder_attention_mask"][:, None, :, :]
if num_dims_encoder_attention_mask == 2:
encoder_extended_attention_mask = inputs["encoder_attention_mask"][:, None, None, :]
# T5 has a mask that can compare sequence ids, we can simulate this here with this transposition
# Cf. https://github.com/tensorflow/mesh/blob/8d2465e9bc93129b913b5ccc6a59aa97abd96ec6/mesh_tensorflow/transformer/transformer_layers.py#L270
# encoder_extended_attention_mask = tf.math.equal(encoder_extended_attention_mask,
# tf.transpose(encoder_extended_attention_mask, perm=(-1, -2)))
encoder_extended_attention_mask = (1.0 - encoder_extended_attention_mask) * -1e9
else:
encoder_extended_attention_mask = None
> assert inputs["head_mask"] is None, "Head mask not supported"
E AssertionError: Head mask not supported
../src/transformers/models/t5/modeling_tf_t5.py:714: AssertionError
============================================================================================================== warnings summary ==============================================================================================================
../../../../../../miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22
/Users/daniel.stancl/miniconda3/envs/bart/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/test_modeling_tf_t5.py: 44 warnings
/var/folders/vs/4jsdk4nx1ds2m48ltfk3nmdc0000gn/T/tmpc35hmpmg.py:8: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
ag__.converted_call(ag__.ld(warnings).warn, ("The 'warn' method is deprecated, use 'warning' instead", ag__.ld(DeprecationWarning), 2), None, fscope)
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
/Users/daniel.stancl/Documents/PhD/Projects/test_transformers/transformers/src/transformers/modeling_tf_utils.py:293: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
tf_logger.warn(
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
tests/test_modeling_tf_t5.py::TFT5EncoderOnlyModelTest::test_saved_model_creation
/Users/daniel.stancl/Documents/PhD/Projects/test_transformers/transformers/src/transformers/modeling_tf_utils.py:302: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
tf_logger.warn("The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.")
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================================================== short test summary info ===========================================================================================================
FAILED test_modeling_tf_t5.py::TFT5ModelTest::test_headmasking - AssertionError: Head mask not supported
```
My contribution: I'm gonna try to take care of this tomorrow.
<hr>
Reviewer: @jplu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9858/comments | https://api.github.com/repos/huggingface/transformers/issues/9858/events | https://github.com/huggingface/transformers/pull/9858 | 795,450,672 | MDExOlB1bGxSZXF1ZXN0NTYyNzk2MjEy | 9,858 | Remove redundant `test_head_masking = True` flags in test files | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | This PR removes redundant `test_head_masking = True` flags from test files as this is set by default.
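For reference, a schematic of the redundant pattern being removed (the class name and attributes are illustrative, not an exact diff):

```python
import unittest

from .test_modeling_common import ModelTesterMixin  # the common mixin already sets test_head_masking = True


class SomeModelTest(ModelTesterMixin, unittest.TestCase):
    test_head_masking = True  # redundant: identical to the mixin default, so the line can simply be dropped
```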
Reviewer: @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9858/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9858",
"html_url": "https://github.com/huggingface/transformers/pull/9858",
"diff_url": "https://github.com/huggingface/transformers/pull/9858.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9858.patch",
"merged_at": 1611846554000
} |
https://api.github.com/repos/huggingface/transformers/issues/9857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9857/comments | https://api.github.com/repos/huggingface/transformers/issues/9857/events | https://github.com/huggingface/transformers/pull/9857 | 795,363,007 | MDExOlB1bGxSZXF1ZXN0NTYyNzIzNTk5 | 9,857 | Pin memory in Trainer by default | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could we please go through normal PR review approval cycles? Unless I missed something and there was one.\r\n\r\nIt looks like my comment on slack was missed where I suggested to use a more specific cl arg name. \r\n\r\nI proposed one of:\r\n- dataloader_pin_memory\r\n- dl_pin_memory\r\n\r\nBut since we already have `dataloader_num_workers`:\r\n```\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.pin_memory,\r\n```\r\nit should probably be `dataloader_pin_memory`\r\n\r\nThis is important since there are other ways to pin memory in pytorch.\r\n\r\n------------------\r\n\r\nThis is a general comment - not specific to this PR:\r\n\r\nWe have this ongoing issue wrt cl arg naming, that we name something and later we realize it's not the best name and then we are concerned with changing the name not to break user's code, so let's think deeply about new cl args names before we add them. Thank you!",
"@stas00 It seems like I missed this message and when I opened this PR in the morning, I didn't see any comments and @sgugger had approved the PR. For a final check, I asked @LysandreJik who gave me the green light.\r\n\r\nTo avoid this in future, I would request if PR specific comments are made on the PR itself so that author & other reviewers can go through them and make sure that everything is resolved before merging.",
"Yes, absolutely. I guess it just fell through the cracks.\r\n\r\nAnd let's have PR description, as simple as:\r\n\r\nThis PR adds `--pin_memory` to trainer DataLoader and it defaults to True. \r\n\r\n\r\n"
] | 1,611 | 1,611 | 1,611 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9857/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9857/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9857",
"html_url": "https://github.com/huggingface/transformers/pull/9857",
"diff_url": "https://github.com/huggingface/transformers/pull/9857.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9857.patch",
"merged_at": 1611820247000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9856/comments | https://api.github.com/repos/huggingface/transformers/issues/9856/events | https://github.com/huggingface/transformers/pull/9856 | 795,313,052 | MDExOlB1bGxSZXF1ZXN0NTYyNjgyMzY2 | 9,856 | Add head_mask and decoder_head_mask to PyTorch LED | {
"login": "stancld",
"id": 46073029,
"node_id": "MDQ6VXNlcjQ2MDczMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/46073029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stancld",
"html_url": "https://github.com/stancld",
"followers_url": "https://api.github.com/users/stancld/followers",
"following_url": "https://api.github.com/users/stancld/following{/other_user}",
"gists_url": "https://api.github.com/users/stancld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stancld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stancld/subscriptions",
"organizations_url": "https://api.github.com/users/stancld/orgs",
"repos_url": "https://api.github.com/users/stancld/repos",
"events_url": "https://api.github.com/users/stancld/events{/privacy}",
"received_events_url": "https://api.github.com/users/stancld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | This PR implements `head_mask` and `decoder_head_mask` for PyTorch LED (and Longformer as there's a copy dependency) and it is the follow-up to the open issue #9814.
**Motivation:** This PR is a part of an endeavour to enable the usage of `head_mask` and `decoder_head_mask` for all encoder-decoder transformers following the recent work on BART-like models (#9569).
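A minimal usage sketch of the new arguments (the checkpoint name, mask shapes and the particular head that is disabled are illustrative assumptions, not part of this PR):

```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("summarize: the quick brown fox jumps over the lazy dog.", return_tensors="pt")

# One mask value per layer and attention head: 1.0 keeps a head, 0.0 masks it out.
head_mask = torch.ones(model.config.encoder_layers, model.config.encoder_attention_heads)
decoder_head_mask = torch.ones(model.config.decoder_layers, model.config.decoder_attention_heads)
head_mask[0, 0] = 0.0  # e.g. disable the first head of the first encoder layer

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    head_mask=head_mask,
    decoder_head_mask=decoder_head_mask,
)
```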
<hr>
Fixes: https://github.com/huggingface/transformers/issues/9814
Reviewers: @patrickvonplaten @LysandreJik @stas00 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9856",
"html_url": "https://github.com/huggingface/transformers/pull/9856",
"diff_url": "https://github.com/huggingface/transformers/pull/9856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9856.patch",
"merged_at": 1612292812000
} |
https://api.github.com/repos/huggingface/transformers/issues/9855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9855/comments | https://api.github.com/repos/huggingface/transformers/issues/9855/events | https://github.com/huggingface/transformers/issues/9855 | 795,242,726 | MDU6SXNzdWU3OTUyNDI3MjY= | 9,855 | About max_length in generation_utils.py | {
"login": "LinjianLi",
"id": 43627450,
"node_id": "MDQ6VXNlcjQzNjI3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/43627450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LinjianLi",
"html_url": "https://github.com/LinjianLi",
"followers_url": "https://api.github.com/users/LinjianLi/followers",
"following_url": "https://api.github.com/users/LinjianLi/following{/other_user}",
"gists_url": "https://api.github.com/users/LinjianLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LinjianLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LinjianLi/subscriptions",
"organizations_url": "https://api.github.com/users/LinjianLi/orgs",
"repos_url": "https://api.github.com/users/LinjianLi/repos",
"events_url": "https://api.github.com/users/LinjianLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LinjianLi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @LinjianLi,\r\n\r\nnote that `max_length` states the maximum length of both generated tokens and input tokens (which is always at least 1). This means that we count the first special token also as an output token (it will be in the final output) and thus should also be included when computing `max_length`",
"> Hey @LinjianLi,\r\n> \r\n> note that `max_length` states the maximum length of both generated tokens and input tokens (which is always at least 1). This means that we count the first special token also as an output token (it will be in the final output) and thus should also be included when computing `max_length`\r\n\r\nThanks for your reply!"
] | 1,611 | 1,614 | 1,614 | NONE | null | In `generation_utils.py`, the docstring of the `beam_search` function shows the example of usage.
```
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id
>>> something else that I omit here
>>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
```
The `beam_search` function uses `while cur_len < max_length` to control the length of the generated sequence. But `cur_len` counts the length including the start token, which is a special token. When the user sets `max_length = 1`, does that not mean the user wants the model to generate one token **without counting the start token** (I am not sure whether it is just me or whether others read it that way too)? But `cur_len` will already be 1 at the beginning, because of the start token and the statement below in the source code.
```
batch_beam_size, cur_len = input_ids.shape
```
The control flow will jump out of the `while` loop and not generate any token.
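A minimal standalone sketch of that loop behavior under this reading (simplified, not the actual library code):

```python
cur_len, max_length = 1, 1  # cur_len already counts the decoder start token, as in the snippet above
generated_tokens = 0
while cur_len < max_length:  # 1 < 1 is False, so the loop body never runs
    generated_tokens += 1
    cur_len += 1
print(generated_tokens)  # 0 -> no new token is generated for max_length = 1
```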
Maybe `while cur_len < max_length` should be changed to `while cur_len <= max_length`. And maybe other functions should also change the corresponding loop control statement if I am right. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9855/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9854/comments | https://api.github.com/repos/huggingface/transformers/issues/9854/events | https://github.com/huggingface/transformers/pull/9854 | 795,225,626 | MDExOlB1bGxSZXF1ZXN0NTYyNjA5OTAz | 9,854 | Deprecate model_path in Trainer.train | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | COLLABORATOR | null | # What does this PR do?
This PR deprecates `Trainer.train(model_path=xxx)` to be replaced by `Trainer.train(resume_from_checkpoint=xxx)` which (I think) is clearer and better. No breaking change, just a deprecation warning for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9854",
"html_url": "https://github.com/huggingface/transformers/pull/9854",
"diff_url": "https://github.com/huggingface/transformers/pull/9854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9854.patch",
"merged_at": 1611840766000
} |
https://api.github.com/repos/huggingface/transformers/issues/9853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9853/comments | https://api.github.com/repos/huggingface/transformers/issues/9853/events | https://github.com/huggingface/transformers/pull/9853 | 795,184,411 | MDExOlB1bGxSZXF1ZXN0NTYyNTc1NDQx | 9,853 | Fix computation of attention_probs when head_mask is provided. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot!"
] | 1,611 | 1,611 | 1,611 | MEMBER | null | Remove the dead code path taken when computing `attention_probs` in case `head_mask` is provided.
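For context, a simplified sketch of where the head mask is meant to act (the function and tensor names are assumptions of this sketch, not the exact library code):

```python
import torch

def masked_attention(attention_scores, value_layer, head_mask=None):
    # attention_scores: (batch, heads, seq, seq); value_layer: (batch, heads, seq, head_dim)
    attention_probs = torch.nn.functional.softmax(attention_scores, dim=-1)
    if head_mask is not None:
        # the mask has to act on the probabilities that are actually used downstream
        attention_probs = attention_probs * head_mask
    context_layer = torch.matmul(attention_probs, value_layer)
    return context_layer, attention_probs
```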
Masking was computed on `attention_scores`, and that masked result is never used or returned afterwards. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9853/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9853",
"html_url": "https://github.com/huggingface/transformers/pull/9853",
"diff_url": "https://github.com/huggingface/transformers/pull/9853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9853.patch",
"merged_at": 1611832313000
} |
https://api.github.com/repos/huggingface/transformers/issues/9852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9852/comments | https://api.github.com/repos/huggingface/transformers/issues/9852/events | https://github.com/huggingface/transformers/pull/9852 | 795,159,805 | MDExOlB1bGxSZXF1ZXN0NTYyNTU0NzMy | 9,852 | Adding a new `return_full_text` parameter to TextGenerationPipeline. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Don't mind the failing test, you can merge when ready."
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
For text generation, the input text is sometimes used as a prompt.
In that context, prefixing `generated_text` with the full input
forces the caller to take an extra step to strip it off again.
The proposed change adds a new parameter, `return_full_text`, that lets
the caller prevent the prompt from being prepended; the default keeps the
current behavior for backward compatibility.
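A minimal usage sketch (the checkpoint name is just an example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Hello, I'm a language model,"
outputs = generator(prompt, max_length=30, return_full_text=False)
print(outputs[0]["generated_text"])  # only the newly generated continuation, without the prompt
```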
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9852/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9852",
"html_url": "https://github.com/huggingface/transformers/pull/9852",
"diff_url": "https://github.com/huggingface/transformers/pull/9852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9852.patch",
"merged_at": 1611912453000
} |
https://api.github.com/repos/huggingface/transformers/issues/9851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9851/comments | https://api.github.com/repos/huggingface/transformers/issues/9851/events | https://github.com/huggingface/transformers/pull/9851 | 795,135,405 | MDExOlB1bGxSZXF1ZXN0NTYyNTM0MTc0 | 9,851 | [GA forks] Test on every push | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9851",
"html_url": "https://github.com/huggingface/transformers/pull/9851",
"diff_url": "https://github.com/huggingface/transformers/pull/9851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9851.patch",
"merged_at": 1611756714000
} |
https://api.github.com/repos/huggingface/transformers/issues/9850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9850/comments | https://api.github.com/repos/huggingface/transformers/issues/9850/events | https://github.com/huggingface/transformers/issues/9850 | 795,118,067 | MDU6SXNzdWU3OTUxMTgwNjc= | 9,850 | Some model use serve previous version can not do inference in web api. | {
"login": "svjack",
"id": 27874014,
"node_id": "MDQ6VXNlcjI3ODc0MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svjack",
"html_url": "https://github.com/svjack",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://api.github.com/users/svjack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/svjack/subscriptions",
"organizations_url": "https://api.github.com/users/svjack/orgs",
"repos_url": "https://api.github.com/users/svjack/repos",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"received_events_url": "https://api.github.com/users/svjack/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The error here seems to be because there's a dissociated tokenizer and model. The tokenizer should be BERT while the model should be ALBERT.\r\n\r\nThe configuration should reflect this by having a `\"tokenizer_class\": \"BertTokenizer\"`.\r\n\r\nPinging @wptoux\r\n\r\nAn example can be seen with PhoBERT having the model set as RoBERTa and the tokenizer as `PhobertTokenizer`: https://huggingface.co/vinai/phobert-base/blob/main/config.json",
"> The error here seems to be because there's a dissociated tokenizer and model. The tokenizer should be BERT while the model should be ALBERT.\r\n> \r\n> The configuration should reflect this by having a `\"tokenizer_class\": \"BertTokenizer\"`.\r\n> \r\n> Pinging @wptoux\r\n> \r\n> An example can be seen with PhoBERT having the model set as RoBERTa and the tokenizer as `PhobertTokenizer`: https://huggingface.co/vinai/phobert-base/blob/main/config.json\r\n\r\nThis library has improved a lot since I release this model, I will update it.",
"Glad to hear it @wptoux! Thank you!",
"I have fixed the problem, and the web api is working now.\r\n\r\nHere is an test example\r\n\r\nContext: 李白(701年—762年12月) ,字太白,号青莲居士,又号“谪仙人”,唐代伟大的浪漫主义诗人,被后人誉为“诗仙”,与杜甫并称为“李杜”,为了与另两位诗人李商隐与杜牧即“小李杜”区别,杜甫与李白又合称“大李杜”。北京大学教授李志敏评价:“李白之诗呼吸宇宙,出乎道;杜甫之诗德参天地,源于儒,皆至天人合一境界,故能出神入化。\r\n\r\nQuestion: 如何评价李白的诗\r\n\r\nAnswer: 李白之诗呼吸宇宙,出乎道",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | The model served at
https://huggingface.co/wptoux/albert-chinese-large-qa
cannot run inference when the “compute” button is clicked,
because it was built with transformers 3.0.2 and cannot be
loaded properly with the current version.
I think the online model server should take into account the transformers version a model was implemented with. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9849/comments | https://api.github.com/repos/huggingface/transformers/issues/9849/events | https://github.com/huggingface/transformers/pull/9849 | 795,114,472 | MDExOlB1bGxSZXF1ZXN0NTYyNTE2OTEw | 9,849 | Labeled pull requests | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9849/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9849",
"html_url": "https://github.com/huggingface/transformers/pull/9849",
"diff_url": "https://github.com/huggingface/transformers/pull/9849.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9849.patch",
"merged_at": 1611755155000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9848/comments | https://api.github.com/repos/huggingface/transformers/issues/9848/events | https://github.com/huggingface/transformers/pull/9848 | 795,104,682 | MDExOlB1bGxSZXF1ZXN0NTYyNTA4ODA3 | 9,848 | Add XLA test | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Out of curiosity, how long are those tests for the models that have them?",
"few milliseconds, XLA is really fast :)"
] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
In the same spirit as the mixed precision test, this PR adds one for XLA compliance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9848/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9848",
"html_url": "https://github.com/huggingface/transformers/pull/9848",
"diff_url": "https://github.com/huggingface/transformers/pull/9848.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9848.patch",
"merged_at": 1611915903000
} |
https://api.github.com/repos/huggingface/transformers/issues/9847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9847/comments | https://api.github.com/repos/huggingface/transformers/issues/9847/events | https://github.com/huggingface/transformers/pull/9847 | 795,100,786 | MDExOlB1bGxSZXF1ZXN0NTYyNTA1NTMw | 9,847 | TFBart lables consider both pad token and -100 | {
"login": "kiyoungkim1",
"id": 37245002,
"node_id": "MDQ6VXNlcjM3MjQ1MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/37245002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kiyoungkim1",
"html_url": "https://github.com/kiyoungkim1",
"followers_url": "https://api.github.com/users/kiyoungkim1/followers",
"following_url": "https://api.github.com/users/kiyoungkim1/following{/other_user}",
"gists_url": "https://api.github.com/users/kiyoungkim1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kiyoungkim1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiyoungkim1/subscriptions",
"organizations_url": "https://api.github.com/users/kiyoungkim1/orgs",
"repos_url": "https://api.github.com/users/kiyoungkim1/repos",
"events_url": "https://api.github.com/users/kiyoungkim1/events{/privacy}",
"received_events_url": "https://api.github.com/users/kiyoungkim1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten \r\nI merged upstream to the branch!",
"You have an error in code quality, could you run `make style` and `make quality` to check it out? Thanks."
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | For #9770,
1. ```TFBartModels``` use -100 as the masking token for ```decoder_input_ids``` and ```compute_loss```, like other models (```T5```).
2. For backward compatibility, all ```padding token```s in ```labels``` are replaced by the ```-100``` token.
The examples below show the same result for ```labels``` with the ```-100``` token or the ```padding``` token, whereas previously ```NaN``` was produced in the latter case (#9770).
```
import tensorflow as tf
from transformers import BartTokenizer, TFBartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = TFBartForConditionalGeneration.from_pretrained("facebook/bart-base")
inputs = tokenizer("My dog is <mask>", return_tensors='tf', truncation=True, max_length=16, padding="max_length")
labels_ids = tokenizer("My dog is cute", return_tensors='tf', truncation=True, max_length=16, padding="max_length").input_ids
## labels padding_token = 1
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
## labels padding_token = -100
labels_ids = tf.where(
labels_ids == 1, tf.fill(tf.shape(labels_ids), tf.constant(-100, dtype='int32')), labels_ids
)
loss = model(inputs, labels=labels_ids)[0]
print(labels_ids)
print(loss)
```
```
tf.Tensor(
[[ 0 2387 2335 16 11962 2 1 1 1 1 1 1
1 1 1 1]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7192607e-05 7.9230859e-04 6.1941862e+00
1.1058818e+00], shape=(6,), dtype=float32)
tf.Tensor(
[[ 0 2387 2335 16 11962 2 -100 -100 -100 -100 -100 -100
-100 -100 -100 -100]], shape=(1, 16), dtype=int32)
tf.Tensor(
[2.2291888e-05 4.8874615e-05 3.7192607e-05 7.9230859e-04 6.1941862e+00
1.1058818e+00], shape=(6,), dtype=float32)
```
TFBart gives the same result with both the -100 token and the padding token.
However, ```Bart (PyTorch) with the -100 token in labels```, ```Bart with the padding token in labels``` and ```TFBart (TensorFlow with the -100 or padding token)``` give three different results. This is noted here but not addressed in this PR.
@patrickvonplaten
@jplu
@patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9847",
"html_url": "https://github.com/huggingface/transformers/pull/9847",
"diff_url": "https://github.com/huggingface/transformers/pull/9847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9847.patch",
"merged_at": 1612132289000
} |
https://api.github.com/repos/huggingface/transformers/issues/9846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9846/comments | https://api.github.com/repos/huggingface/transformers/issues/9846/events | https://github.com/huggingface/transformers/pull/9846 | 795,095,395 | MDExOlB1bGxSZXF1ZXN0NTYyNTAwOTg5 | 9,846 | Adding new parameter to `generate`: `max_time`. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Continuing a bit the discussion we had offline for others to chime in.\r\n\r\nAfter quite some discussion and thinking, I see the following problem:\r\n\r\n- I don't want to clutter `generate()` with if-statements anymore as it's done a bit in this PR, but rather make use of tools like `LogitsProcessor`. Now @Narsil you gave me some very good arguments to why just adding a `LogitsProcessor` that forces to generate EOS is not good enough (Some models don't have the EOS token & we don't always want to have EOS at the end of the sentence). So I would propose the following solution that we should then also use to deprecate `max_length` from the \"lower\" generate methods like `greedy_search`, `sample`, ...\r\n\r\nAnalogs to `LogitsProcessor` and `LogitsProcessorList`, we create a new logit called `StoppingCriteria` and `StoppingCriteriaList` which would look as follows: \r\n\r\n```python\r\nclass StoppingCriteriaList(list):\r\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> torch.FloatTensor:\r\n stopping = False\r\n for criteria in self:\r\n stopping = stopping or criteria(input_ids, scores)\r\n return stopping\r\n```\r\n\r\nand a `MaxTimeStopping` class as follows:\r\n\r\n```python\r\nclass MaxTimeStopping(StoppingCriteria):\r\n def __init__(self, max_time):\r\n self.start_time = time.time()\r\n self.max_time = max_time\r\n\r\n def __call__(self, *args):\r\n if time.time() - self.start_time > self.max_time:\r\n return True\r\n return False\r\n```\r\n\r\nThe same way we can create a `MaxLengthStopping` class as follows:\r\n\r\n```python\r\nclass MaxLengthStopping(StoppingCriteria):\r\n def __init__(self, max_length):\r\n self.start_time = time.time()\r\n self.max_length = max_length\r\n\r\n def __call__(self, input_ids, *args):\r\n if input_ids.shape[-1] > self.max_length:\r\n return True\r\n return False\r\n```\r\n\r\nThen we can add create a `stopping_criteria` list object in generate along side creating the `logits_processor` list object and pass it to the submodules. In each submodule we would then do something like \r\n\r\n```\r\nif stopping_criteria(input_ids, scores):\r\n break;\r\n```\r\n\r\nI would then also deprecate the `max_length` as an input parameter to `greedy_search` etc and add a `stopping_criteria` list object instead. \r\n\r\nThis new approach would open the way for more fancy stopping criteria. E.g. at the moment `max_length` defines the number of total tokens (passed tokens + generated tokens) instead of just the generated tokens which is very hard to change in terms of backwards compatibility. Lots of people have complained about that. With this approach, one could easily make a new `MaxGeneratedTokenStopping` class that would then take over.\r\n\r\nAnother positive effect of this function is that we can easily test & optimize those classes as we've already seen it for the `LogitsProcessor` classes.\r\n\r\nThis will require a rather big change, so I'd be very glad if @LysandreJik and @sgugger you can give your opinion here before proceeding.",
"Thanks for the thoughtful explanation, this makes a lot of sense. I'm very down to continue the modular approach we have with processors, the new `StoppingCriteria` you propose seems like the way to go. It's good that it keeps the extensibility of the generation methods while not complexifying the generate method itself.",
"Agreed with both of you, this `StoppingCriteria` class seems like a good idea!",
"@patrickvonplaten @LysandreJik \r\nDo you mind a second review ?\r\n\r\nI think this PR is actually ready.\r\n\r\nThe TF code (which was my main concern) doesn't seem to use LogitsProcessor nor to be tested, so I figured leaving `max_time` is ok. I could also simply remove it to make sure I don't break things.",
"> Great! Thanks a lot for tackling this PR!\r\n> \r\n> I'm quite happy with the design :-)\r\n> \r\n> Can we:\r\n> \r\n> 1. Add some docstring for the classes and add those classes to the docs? Give it a new section in `docs/source/internal/generation_utils.rst`\r\n\r\nDone, I also added the import statements within `src/transformers/__init__.py`. Is there any other place I should think of ?\r\n\r\n> \r\n> 2. Deprecate the `max_length` function input argument for all `greedy_search`, `beam_search` and update the docstring and tests using the new `StoppingCriteriaList` instead\r\n\r\nThis is something harder to do because of some other usages of `max_length`. (see other comment). I think it should belong in another PR, because this one is already a bit large. And it would require other kinds of care (regarding performance at least).\r\n\r\nWhat do you think ?\r\n\r\n> \r\n> 3. Change the `class StoppingCriteria` to an abstract class so keep the design as close as possible to the one in `LogitsProcessor...`\r\n\r\nDone. shouldn't they actually contain `@abstractmethod` ?\r\n\r\n> \r\n> 4. Delete the functionality for TF. If it would be ok for you, I'd like to just add this functionality for PyTorch for now since TF needs a big refactor before adding more features IMO\r\n\r\nOk.\r\n\r\n",
"@Narsil, sorry for being so slow on this one! After thinking a bit more, I think you're right that `max_time` should not be part of the config. One last thing that we'll have to do IMO is to ensure backwards compatibility for the \"sub\"-generation methods. See comment above. Please let me know, if this doesn't make sense or if I misunderstood something",
"T5 Also passed, but had a OOM crash on my local machine.\r\n```\r\n================================================================================================================================== test session starts ===================================================================================================================================\r\nplatform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1\r\nrootdir: /home/nicolas/src/transformers\r\nplugins: forked-1.3.0, xdist-2.1.0\r\ncollected 114 items \r\n\r\ntests/test_modeling_bart.py ........................................ssss.......................^[[A..............................ssss............. [100%]\r\n\r\n============================================================= warnings summary =============================================================\r\n.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1897: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert padding_idx < weight.size(0), \"Padding_idx must be within num_embeddings\"\r\n\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:213: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attn_weights.size() == (\r\n\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attention_mask.size() == (\r\n\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:252: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert attn_output.size() == (\r\n\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\ntests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:856: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if input_shape[-1] > 1:\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================= 106 passed, 8 skipped, 28 warnings in 562.15s (0:09:22) ==========================================\r\n```",
"> T5 Also passed, but had a OOM crash on my local machine.\r\n> \r\n> ```\r\n> ================================================================================================================================== test session starts ===================================================================================================================================\r\n> platform linux -- Python 3.8.5, pytest-6.1.1, py-1.9.0, pluggy-0.13.1\r\n> rootdir: /home/nicolas/src/transformers\r\n> plugins: forked-1.3.0, xdist-2.1.0\r\n> collected 114 items \r\n> \r\n> tests/test_modeling_bart.py ........................................ssss.......................^[[A..............................ssss............. [100%]\r\n> \r\n> ============================================================= warnings summary =============================================================\r\n> .venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21\r\n> /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/tensorflow/python/autograph/utils/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n> import imp\r\n> \r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n> /home/nicolas/src/transformers/.venv/lib/python3.8/site-packages/torch/nn/functional.py:1897: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> assert padding_idx < weight.size(0), \"Padding_idx must be within num_embeddings\"\r\n> \r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:213: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> assert attn_weights.size() == (\r\n> \r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> assert attention_mask.size() == (\r\n> \r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:252: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> assert attn_output.size() == (\r\n> \r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartModelTest::test_torchscript_output_hidden_state\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_attentions\r\n> tests/test_modeling_bart.py::BartStandaloneDecoderModelTest::test_torchscript_output_hidden_state\r\n> /home/nicolas/src/transformers/src/transformers/models/bart/modeling_bart.py:856: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n> if input_shape[-1] > 1:\r\n> \r\n> -- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n> ========================================= 106 passed, 8 skipped, 28 warnings in 562.15s (0:09:22) ==========================================\r\n> ```\r\n\r\nOk for me then! If T5 tests pass this is good enough."
] | 1,611 | 1,615 | 1,615 | CONTRIBUTOR | null | Generation by token count is sometimes a bit clunky, because we don't
know how many tokens are good enough, or even how many tokens are in
the payload (for pipelines users, for instance). This leads to
hard-to-understand behavior.
This PR proposes a new argument, `max_time`, a float giving the number of
seconds that `generate` is allowed to run for.
Ideally, combinations like `max_length=None`, `max_time=2` could be used to
generate as many tokens as possible within the time budget.
NB: Another possible approach consists of passing a callback to `generate`,
putting the caller in charge of the actual decision of when to stop
generating tokens. That opens the door to the question of which arguments
should be passed to this callback. It's hard to imagine other use cases for
this early-stopping behavior than time (that are not already covered by the
existing parameters of `generate`).
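For illustration, here is a minimal sketch of how the proposed argument could be used once merged (the checkpoint, prompt and values below are arbitrary assumptions, not part of this PR):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids

# Stop after roughly 2 seconds of wall-clock time, whichever of
# max_length / max_time is reached first.
output_ids = model.generate(input_ids, do_sample=True, max_length=512, max_time=2.0)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```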
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @LysandreJik
@jplu
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9846/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9846/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9846",
"html_url": "https://github.com/huggingface/transformers/pull/9846",
"diff_url": "https://github.com/huggingface/transformers/pull/9846.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9846.patch",
"merged_at": 1615540310000
} |
https://api.github.com/repos/huggingface/transformers/issues/9845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9845/comments | https://api.github.com/repos/huggingface/transformers/issues/9845/events | https://github.com/huggingface/transformers/pull/9845 | 795,088,291 | MDExOlB1bGxSZXF1ZXN0NTYyNDk1MDc4 | 9,845 | [WIP/ don't merge] T5 gradient checkpointing | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9845",
"html_url": "https://github.com/huggingface/transformers/pull/9845",
"diff_url": "https://github.com/huggingface/transformers/pull/9845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9845.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9844/comments | https://api.github.com/repos/huggingface/transformers/issues/9844/events | https://github.com/huggingface/transformers/pull/9844 | 795,086,515 | MDExOlB1bGxSZXF1ZXN0NTYyNDkzNjIy | 9,844 | [examples/seq2seq] support label smoothing | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I don't know if the shift methods are used for something else in the seq2seq methods, but if this was their only use, we could maybe deprecate them?\r\n\r\nthose are used for exactly the same reason, `prepare decoder_input_ids` by shifting `labels`, and those are mostly used inside the models, so yeah, think we could deprecate them",
"I agree we could remove the `pad_token_id` argument."
] | 1,611 | 1,612 | 1,612 | MEMBER | null | # What does this PR do?
Adds support for label smoothing by adding a `prepare_decoder_input_ids_from_labels` method to all seq2seq models, which lets us prepare `decoder_input_ids` outside the model.
For context, we need to pass `decoder_input_ids` for label smoothing because we don't pass `labels` (this avoids calculating the loss twice, which degrades speed; see #9713).
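As a rough illustration of the intended usage (not part of this PR's diff; `model`, `batch` and `labels` are assumed to already exist, and the smoothing value is arbitrary):
```python
from transformers.trainer_pt_utils import LabelSmoother

label_smoother = LabelSmoother(epsilon=0.1)

# Prepare decoder inputs from the labels outside the model, so we can skip
# passing `labels` (which would compute the loss a second time inside the model).
decoder_input_ids = model.prepare_decoder_input_ids_from_labels(labels=labels)
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    decoder_input_ids=decoder_input_ids,
)
loss = label_smoother(outputs, labels)
```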
@sgugger, @patrickvonplaten, what do we think about adding `prepare_decoder_input_ids_from_labels` to every seq2seq model? There are already `shift_tokens_right`/`_shift_right` methods, but their names are a bit confusing to use outside the model, IMO. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9844",
"html_url": "https://github.com/huggingface/transformers/pull/9844",
"diff_url": "https://github.com/huggingface/transformers/pull/9844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9844.patch",
"merged_at": 1612547517000
} |
https://api.github.com/repos/huggingface/transformers/issues/9843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9843/comments | https://api.github.com/repos/huggingface/transformers/issues/9843/events | https://github.com/huggingface/transformers/issues/9843 | 795,084,829 | MDU6SXNzdWU3OTUwODQ4Mjk= | 9,843 | SQUAD Question Answering example:: RuntimeError: Could not infer dtype of NoneType | {
"login": "paniabhisek",
"id": 9455582,
"node_id": "MDQ6VXNlcjk0NTU1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9455582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paniabhisek",
"html_url": "https://github.com/paniabhisek",
"followers_url": "https://api.github.com/users/paniabhisek/followers",
"following_url": "https://api.github.com/users/paniabhisek/following{/other_user}",
"gists_url": "https://api.github.com/users/paniabhisek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paniabhisek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paniabhisek/subscriptions",
"organizations_url": "https://api.github.com/users/paniabhisek/orgs",
"repos_url": "https://api.github.com/users/paniabhisek/repos",
"events_url": "https://api.github.com/users/paniabhisek/events{/privacy}",
"received_events_url": "https://api.github.com/users/paniabhisek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm not able to reproduce the issue. I went to [this page](https://huggingface.co/transformers/custom_datasets.html), then clicked on \"Open in colab\" on the top right (chose PyTorch), and then run the question-answering tutorial, and it's working fine for me:\r\n\r\n\r\n\r\n\r\n",
"Hi @paniabhisek \r\n\r\nFor QA you could use the official `run_qa.py ` example scripts which now supports `Trainer` and `datasets`. You can find it here \r\nhttps://github.com/huggingface/transformers/tree/master/examples/question-answering \r\n\r\n",
"@NielsRogge I ran the code in colab, it's working for me too. But not in conda environment.\r\n\r\n@patil-suraj [example-script](https://github.com/huggingface/transformers/tree/master/examples/question-answering) only supports squad 1.1 ? Does it support squad 2.0 ?",
"It supports squad V1 and V2. For V2, just add the flag `--version2_with_negative` (on top of `--dataset_nme squad_v2`)",
"If you try to call `train_dataset[137]`, it returns an error (`[136]` and `[138]` both work properly). It is because `end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))` and `end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] + 1))` do not find the correct token; the `end_position[-1]` is None. The code before #9378 should work.",
"[#9378-comment](https://github.com/huggingface/transformers/pull/9378#issuecomment-759717949) have worked for me. I was wondering how to use a snippet without an unfamiliar script so I can use my own language model. thanks @kevinthwu .\r\n\r\nbtw thanks @sgugger I can use the squad 2.0 with the option `--version2_with_negative`.\r\n\r\nI'm not closing as the docs are not updated yet.",
"> It supports squad V1 and V2. For V2, just add the flag `--version2_with_negative` (on top of `--dataset_nme squad_v2`)\r\n\r\n the argument name is '**version_2_with_negative**' (line 444 run_qa.py)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux
- Python version: 3.6.12
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- @sgugger, @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library: @sgugger
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples: @patil-suraj
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [X] the official example scripts: (give details below)
mkdir squad
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O squad/train-v2.0.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O squad/dev-v2.0.json
import json
from pathlib import Path
def read_squad(path):
    path = Path(path)
    with open(path, 'rb') as f:
        squad_dict = json.load(f)
    contexts = []
    questions = []
    answers = []
    for group in squad_dict['data']:
        for passage in group['paragraphs']:
            context = passage['context']
            for qa in passage['qas']:
                question = qa['question']
                for answer in qa['answers']:
                    contexts.append(context)
                    questions.append(question)
                    answers.append(answer)
    return contexts, questions, answers
train_contexts, train_questions, train_answers = read_squad('squad/train-v2.0.json')
val_contexts, val_questions, val_answers = read_squad('squad/dev-v2.0.json')
def add_end_idx(answers, contexts):
    for answer, context in zip(answers, contexts):
        gold_text = answer['text']
        start_idx = answer['answer_start']
        end_idx = start_idx + len(gold_text)
        # sometimes squad answers are off by a character or two – fix this
        if context[start_idx:end_idx] == gold_text:
            answer['answer_end'] = end_idx
        elif context[start_idx-1:end_idx-1] == gold_text:
            answer['answer_start'] = start_idx - 1
            answer['answer_end'] = end_idx - 1  # When the gold label is off by one character
        elif context[start_idx-2:end_idx-2] == gold_text:
            answer['answer_start'] = start_idx - 2
            answer['answer_end'] = end_idx - 2  # When the gold label is off by two characters
add_end_idx(train_answers, train_contexts)
add_end_idx(val_answers, val_contexts)
from transformers import DistilBertTokenizerFast
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True)
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end']))
        # if start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        # if end position is None, the 'char_to_token' function points to the space before the correct token -> add + 1
        if end_positions[-1] is None:
            end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] + 1)
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)
import torch
class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)
train_dataset = SquadDataset(train_encodings)
val_dataset = SquadDataset(val_encodings)
from transformers import DistilBertForQuestionAnswering, Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=3,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=64,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
)
model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
    model=model,                  # the instantiated 🤗 Transformers model to be trained
    args=training_args,           # training arguments, defined above
    train_dataset=train_dataset,  # training dataset
    eval_dataset=val_dataset      # evaluation dataset
)
trainer.train()
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name) SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Ran the example from [squad question answering](https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0)
2. Got the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-fe1badbb2679> in <module>
21 )
22
---> 23 trainer.train()
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial)
871 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
872
--> 873 for step, inputs in enumerate(epoch_iterator):
874
875 # Skip past any already trained steps if resuming training
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
/media/data2/anaconda/envs/hr/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-8-a9d5c9a06902> in __getitem__(self, idx)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
<ipython-input-8-a9d5c9a06902> in <dictcomp>(.0)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
RuntimeError: Could not infer dtype of NoneType
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should have run without the error.
<!-- A clear and concise description of what you would expect to happen. -->
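One possible local adjustment to `add_token_positions` (an untested sketch based on the workaround mentioned in the comments around #9378, not necessarily the official fix) is to treat `answer_end` as exclusive and walk back until `char_to_token` finds a token:
```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    for i in range(len(answers)):
        start_positions.append(encodings.char_to_token(i, answers[i]['answer_start']))
        # treat answer_end as exclusive: look at the last answer character first
        end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1))
        # if the start position is None, the answer passage has been truncated
        if start_positions[-1] is None:
            start_positions[-1] = tokenizer.model_max_length
        # walk back until char_to_token finds a token for the end position
        go_back = 2
        while end_positions[-1] is None and go_back <= answers[i]['answer_end']:
            end_positions[-1] = encodings.char_to_token(i, answers[i]['answer_end'] - go_back)
            go_back += 1
        if end_positions[-1] is None:
            end_positions[-1] = tokenizer.model_max_length
    encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
```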
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9843/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9842/comments | https://api.github.com/repos/huggingface/transformers/issues/9842/events | https://github.com/huggingface/transformers/pull/9842 | 795,081,015 | MDExOlB1bGxSZXF1ZXN0NTYyNDg4OTUx | 9,842 | Fix model templates | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | Fixes the style issue with model templates | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9842/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9842",
"html_url": "https://github.com/huggingface/transformers/pull/9842",
"diff_url": "https://github.com/huggingface/transformers/pull/9842.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9842.patch",
"merged_at": 1611753659000
} |
https://api.github.com/repos/huggingface/transformers/issues/9841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9841/comments | https://api.github.com/repos/huggingface/transformers/issues/9841/events | https://github.com/huggingface/transformers/issues/9841 | 795,059,066 | MDU6SXNzdWU3OTUwNTkwNjY= | 9,841 | Multi-TPU training uses just 1 out of 8 cores. | {
"login": "avacaondata",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avacaondata",
"html_url": "https://github.com/avacaondata",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi there. It's just a logging problem in the reporting of the total batch size. If we do the math, from your 5835032 samples, we get 91,172 batches per device, 11,396 batches total (divided by the number of cores) and 1,424 optimization steps (divided by the accumulation steps), which, multiplied by the 3 epochs, gives us the 4,272 steps you see.\r\n\r\nSo the number of cores is indeed taken into account.",
"Ahh, I see, my bad, I didn't calculate the number of steps correctly then (what a Data Scientist :P) Thank You very much @sgugger "
] | 1,611 | 1,611 | 1,611 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: n1-standard-64 Google Cloud
- Python version: 3.7
- PyTorch version (GPU?): 1.7 XLA
- Tensorflow version (GPU?):
- Using GPU in script?: NO, using TPU
- Using distributed or parallel set-up in script?: YES; I try to run it in parallel using all 8 cores with xla_spawn.py setting num_cores to 8 in a V3-8.
### Who can help
@patrickvonplaten, @LysandreJik @sgugger
## Information
Model I am using (Bert, XLNet ...): ALBERT base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The problem occurs when I try to train with run_mlm_wwm.py through xla_spawn.py. I've checked that when xla_spawn calls run_mlm_wwm.py, xm.xrt_world_size() is 8, as it should be. However, when the Trainer starts to train, its batch size is only 64, whereas it should be 64 * num_cores = 512. I've printed out the parameters sent by xla_spawn and those received by run_mlm_wwm.py, and they coincide, so I don't understand why, in line 690 of the trainer (`total_train_batch_size = self.args.train_batch_size * xm.xrt_world_size()`), the total_train_batch_size is not converted to 512...
This is the full call:
```{bash}
XRT_TPU_CONFIG="tpu_worker;0;10.44.99.146:8470" python -u transformers/examples/xla_spawn.py --num_cores 8 \
transformers/examples/language-modeling/run_mlm_wwm.py \
--model_type albert \
--config_name ./config/albert-base-v2.json \
--tokenizer_name ./tokenizer_2912 \
--train_file ./train_texts_1_percent.txt \
--validation_file ./validation_data/good_texts.csv \
--output_dir ./models/model_1_percent \
--overwrite_output_dir \
--do_train \
--do_eval \
--evaluation_strategy steps \
--per_device_train_batch_size 64 \
--per_device_eval_batch_size 128 \
--gradient_accumulation_steps 8 \
--learning_rate 0.00176 \
--save_steps 1000 \
--logging_steps 1000 \
--overwrite_cache \
--max_seq_length 512 \
--eval_accumulation_steps 10 \
--load_best_model_at_end \
--run_name model_1_percent \
--save_total_limit 20 --tpu_metrics_debug
```
The model starts to train, but it doesn't take into account that it has 8 tpu cores:
```
[INFO|trainer.py:662] 2021-01-27 12:22:50,282 >> ***** Running training *****
[INFO|trainer.py:663] 2021-01-27 12:22:50,282 >> Num examples = 5835032
[INFO|trainer.py:664] 2021-01-27 12:22:50,282 >> Num Epochs = 3
[INFO|trainer.py:665] 2021-01-27 12:22:50,282 >> Instantaneous batch size per device = 64
[INFO|trainer.py:666] 2021-01-27 12:22:50,282 >> Total train batch size (w. parallel, distributed & accumulation) = 512
[INFO|trainer.py:667] 2021-01-27 12:22:50,282 >> Gradient Accumulation steps = 8
[INFO|trainer.py:668] 2021-01-27 12:22:50,282 >> Total optimization steps = 4272
0%| | 3/4272 [04:18<113:20:52, 95.58s/it]
```
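For reference, a small hedged sketch of the arithmetic relating these numbers (using only the values reported in the log and the command line; it is not the Trainer's exact formula):
```python
num_examples = 5_835_032
per_device_bs, num_cores, grad_accum, epochs = 64, 8, 8, 3
steps_per_epoch = num_examples // (per_device_bs * num_cores * grad_accum)
print(steps_per_epoch * epochs)  # 4272, matching the "Total optimization steps" above
```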
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Whole Word Masked Language Modelling
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Instantiate a Google Cloud V3-8 TPU and a n1-standard-64 Google Cloud instance.
2. Use any toy text dataset and any tokenizer and model name from the ones available in Transformers (these won't change the problem, so it's not necessary to have your own pretrained tokenizer or own dataset).
3. Try to execute the command I posted above but setting XRT_TPU_CONFIG to the IP address of your TPU.
## Expected behavior
It's expected that xla_spawn.py runs the python file passed to it in a multiprocessing fashion, distributing the batches and model over the TPU cores; however, at some point the xrt_world_size() turns to 1 and it doesn't see all the devices available anymore, but only one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9840/comments | https://api.github.com/repos/huggingface/transformers/issues/9840/events | https://github.com/huggingface/transformers/pull/9840 | 795,049,207 | MDExOlB1bGxSZXF1ZXN0NTYyNDYxODkw | 9,840 | Fix TF template | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the template and a cast issue.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9840/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9840",
"html_url": "https://github.com/huggingface/transformers/pull/9840",
"diff_url": "https://github.com/huggingface/transformers/pull/9840.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9840.patch",
"merged_at": 1611751230000
} |
https://api.github.com/repos/huggingface/transformers/issues/9839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9839/comments | https://api.github.com/repos/huggingface/transformers/issues/9839/events | https://github.com/huggingface/transformers/pull/9839 | 795,020,524 | MDExOlB1bGxSZXF1ZXN0NTYyNDM3NzIy | 9,839 | Run GA on forks | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834088753,
"node_id": "MDU6TGFiZWwxODM0MDg4NzUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Tests",
"name": "Tests",
"color": "a6fcca",
"default": false,
"description": "Related to tests"
}
] | closed | false | null | [] | [] | 1,611 | 1,614 | 1,614 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9839/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9839",
"html_url": "https://github.com/huggingface/transformers/pull/9839",
"diff_url": "https://github.com/huggingface/transformers/pull/9839.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9839.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/9838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9838/comments | https://api.github.com/repos/huggingface/transformers/issues/9838/events | https://github.com/huggingface/transformers/issues/9838 | 794,968,943 | MDU6SXNzdWU3OTQ5Njg5NDM= | 9,838 | logging_epochs argument for TrainingArguments | {
"login": "hasansalimkanmaz",
"id": 49716619,
"node_id": "MDQ6VXNlcjQ5NzE2NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49716619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasansalimkanmaz",
"html_url": "https://github.com/hasansalimkanmaz",
"followers_url": "https://api.github.com/users/hasansalimkanmaz/followers",
"following_url": "https://api.github.com/users/hasansalimkanmaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hasansalimkanmaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasansalimkanmaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasansalimkanmaz/subscriptions",
"organizations_url": "https://api.github.com/users/hasansalimkanmaz/orgs",
"repos_url": "https://api.github.com/users/hasansalimkanmaz/repos",
"events_url": "https://api.github.com/users/hasansalimkanmaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasansalimkanmaz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Just like `evaluation_strategy` chooses between `'steps'` and `'epoch'`, to maintain consistency I think it is better to introduce either of:\r\n1. a new enumeration `LoggingStrategy` with values\r\n 1. `'epoch'` for per-epoch functionality\r\n 2. `'steps'` functionality by falling back on `logging_steps`\r\n2. a new bool argument `log_per_epoch` to decide between `epoch` or `steps` functionality and proceed similarly as above\r\n\r\n@hasansalimkanmaz If you're still occupied, is it okay if I take a stab at this?",
"Feel free to go ahead @tanmay17061 I am still busy with some other staff. Thanks for your interest."
] | 1,611 | 1,613 | 1,613 | CONTRIBUTOR | null | # 🚀 Feature request
There is no `logging_epochs` argument in `TrainingArguments`. When someone wants to train with `EvaluationStrategy.EPOCH`, he/she wants to see the logs after each epoch. Currently it is not possible.
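Until such an argument exists, one hedged workaround sketch is a small callback that forces a log at the end of every epoch (this assumes the current `TrainerCallback`/`TrainerControl` API and is only illustrative):
```python
from transformers import TrainerCallback

class LogEveryEpochCallback(TrainerCallback):
    def on_epoch_end(self, args, state, control, **kwargs):
        control.should_log = True  # ask the Trainer to emit its logs at the epoch boundary
        return control
```
It could then be passed to the `Trainer` via `callbacks=[LogEveryEpochCallback()]`.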
## Motivation
Better logging for training
## Your contribution
If I have time, I would like to add it. However, I am not available in the coming couple of weeks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9838/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9837/comments | https://api.github.com/repos/huggingface/transformers/issues/9837/events | https://github.com/huggingface/transformers/pull/9837 | 794,928,242 | MDExOlB1bGxSZXF1ZXN0NTYyMzYwOTA4 | 9,837 | Fixing flaky conversational test + flag it as a pipeline test. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9837/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9837",
"html_url": "https://github.com/huggingface/transformers/pull/9837",
"diff_url": "https://github.com/huggingface/transformers/pull/9837.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9837.patch",
"merged_at": 1611825596000
} |
https://api.github.com/repos/huggingface/transformers/issues/9836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9836/comments | https://api.github.com/repos/huggingface/transformers/issues/9836/events | https://github.com/huggingface/transformers/issues/9836 | 794,921,224 | MDU6SXNzdWU3OTQ5MjEyMjQ= | 9,836 | [docs] use `versionadded`, `versionchanged` and `deprecated` directive | {
"login": "ydcjeff",
"id": 32727188,
"node_id": "MDQ6VXNlcjMyNzI3MTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydcjeff",
"html_url": "https://github.com/ydcjeff",
"followers_url": "https://api.github.com/users/ydcjeff/followers",
"following_url": "https://api.github.com/users/ydcjeff/following{/other_user}",
"gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions",
"organizations_url": "https://api.github.com/users/ydcjeff/orgs",
"repos_url": "https://api.github.com/users/ydcjeff/repos",
"events_url": "https://api.github.com/users/ydcjeff/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydcjeff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cool idea! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | CONTRIBUTOR | null | # 🚀 Feature request
## Documentation
Use the `.. versionadded::`, `.. versionchanged::` and `.. deprecated::` directives, so that users know which features were added / changed / deprecated in which version and can navigate the docs easily without having to switch between doc versions.
Ref: https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#directive-versionadded
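As a hedged illustration of how this could look in a docstring (the function name and version numbers below are made up for the example):
```python
def new_feature(inputs, strict=False):
    """Do something useful with ``inputs``.

    .. versionadded:: 4.2.0

    .. versionchanged:: 4.3.0
        Added the ``strict`` argument.

    .. deprecated:: 4.5.0
        Use :func:`newer_feature` instead.
    """
    return inputs
```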
## Motivation
To be able to know (without going to github) which features are introduced / changed / deprecated / improved in which version just from the docs.
Since `transformers` is widely used in production, this slight change to the docs can give users a bird's-eye view of the features of the library.
Let me know what you think.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9836/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9836/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9835/comments | https://api.github.com/repos/huggingface/transformers/issues/9835/events | https://github.com/huggingface/transformers/issues/9835 | 794,898,203 | MDU6SXNzdWU3OTQ4OTgyMDM= | 9,835 | support Mixed Precision and avoid 'dtype=float32' in implementation | {
"login": "xuxingya",
"id": 13343428,
"node_id": "MDQ6VXNlcjEzMzQzNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13343428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuxingya",
"html_url": "https://github.com/xuxingya",
"followers_url": "https://api.github.com/users/xuxingya/followers",
"following_url": "https://api.github.com/users/xuxingya/following{/other_user}",
"gists_url": "https://api.github.com/users/xuxingya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuxingya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuxingya/subscriptions",
"organizations_url": "https://api.github.com/users/xuxingya/orgs",
"repos_url": "https://api.github.com/users/xuxingya/repos",
"events_url": "https://api.github.com/users/xuxingya/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuxingya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hello!\r\nThanks for the feature requests. We are currently working on this, some of them already support mixed precision 👍 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,611 | 1,619 | 1,619 | NONE | null | # 🚀 Feature request
When using the Longformer model, I found that the dtype of many tensors is assigned directly with `dtype=tf.dtypes.float32` or via equations, which makes mixed precision training impossible. I found that other models also have this problem.
Of course, it is not a bug to not support mixed precision training. But because Transformer models are usually very large, it would be appreciated if mixed precision training were supported when implementing new models.
So I suggest, if possible, using dtype inference instead of direct dtype assignment.
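A hedged sketch of what is meant (illustrative only, not taken from the library): derive the dtype from an incoming tensor rather than hard-coding `float32`, so a `float16`/`bfloat16` mixed-precision policy survives the op:
```python
import tensorflow as tf

def apply_attention_mask(scores, attention_mask):
    # Hard-coding tf.float32 here would break a float16 compute policy:
    # mask = tf.cast(attention_mask, tf.float32)
    mask = tf.cast(attention_mask, scores.dtype)  # follow the compute dtype instead
    return scores + (1.0 - mask) * tf.cast(-10000.0, scores.dtype)
```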
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9834/comments | https://api.github.com/repos/huggingface/transformers/issues/9834/events | https://github.com/huggingface/transformers/pull/9834 | 794,896,775 | MDExOlB1bGxSZXF1ZXN0NTYyMzM0NzI1 | 9,834 | Improved TF inputs | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As said offline, I feel like this adds unnecessary complexity for no real gain. I don't think this necessarily slows things down, and if it does I'm sure it's by a negligible margin. \r\n\r\nAs you mentioned offline this also fixes a bug, so if you find a way to integrate this in the `input_processing` method as you've mentioned I may be in favor of this change.",
"Ok, I will rethink this to integrate it inside `input_processing`",
"@LysandreJik @patrickvonplaten the check is now inside `input_processing` (done for BERT only to show an example). Does-it fits you better?",
"Yes it's cleaner! I still don't really like the `already_processed=True`, but I understand why it's necessary.\r\n\r\nBy the way, is there a reason we re-specify all the inputs for base models after processing the inputs? Can't we just unpack the `inputs` directly in the transformer? \r\n\r\nInstead of:\r\n\r\n```py\r\n inputs = input_processing(\r\n func=self.call,\r\n config=self.config,\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n training=training,\r\n kwargs_call=kwargs,\r\n )\r\n outputs = self.bert(\r\n input_ids=inputs[\"input_ids\"],\r\n attention_mask=inputs[\"attention_mask\"],\r\n token_type_ids=inputs[\"token_type_ids\"],\r\n position_ids=inputs[\"position_ids\"],\r\n head_mask=inputs[\"head_mask\"],\r\n inputs_embeds=inputs[\"inputs_embeds\"],\r\n output_attentions=inputs[\"output_attentions\"],\r\n output_hidden_states=inputs[\"output_hidden_states\"],\r\n return_dict=inputs[\"return_dict\"],\r\n training=inputs[\"training\"],\r\n )\r\n```\r\n\r\nwe would have:\r\n\r\n```py\r\n inputs = input_processing(\r\n func=self.call,\r\n config=self.config,\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n training=training,\r\n kwargs_call=kwargs,\r\n )\r\n outputs = self.bert(**inputs)\r\n```\r\n\r\nwhich looks cleaner and we there would be no need to mention `already_processed`. Looking at it, I would expect the `input_processing` method to do the full processing for the model inputs, so I don't see why we would need to redefine what inputs we're sending to the model; the selection should already have been made in the `input_processing`.\r\n\r\nPlease let me know if this has already been discussed or if I'm missing something.",
"That's cleaner indeed, but without the `already_processed` argument I don't see how we can know that the input has already been processed or not. \r\n\r\nHow do you know from:\r\n```python\r\n inputs = input_processing(\r\n func=self.call,\r\n config=self.config,\r\n input_ids=input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids,\r\n head_mask=head_mask,\r\n inputs_embeds=inputs_embeds,\r\n output_attentions=output_attentions,\r\n output_hidden_states=output_hidden_states,\r\n return_dict=return_dict,\r\n training=training,\r\n kwargs_call=kwargs,\r\n )\r\n```\r\n\r\nThat the given inputs have already been processed or not without having a flag that gives you this info? :(",
"OK, I might have found the solution, need to think a bit more about this, but wait the next push and you will let me know if it looks better :)",
"@LysandreJik Since the last push now `input_processing` handles 100% of the process, and now the calls can be like `outputs = self.bert(**inputs)`.\r\n\r\nIf everyone is ok with this last version I will update accordingly the other models.",
"To be honest, I don't really see the point of this PR (but maybe there is something I'm not seeing or misunderstood) - is this PR just to run `input_processing` 1 time instead of possibly 4 times? Or is it also fixing a bug/enabling functionality that didn't exist before? Is that speed-up even noticeable? \r\n\r\nI don't think the trade-off between an (I assume) tiny speed-up of the forward pass vs. added complex logic for the user + much more code is worth it here...also this PR would again change all forward functions of all models I think, no? So, we'll run into a bunch of merge conflicts here again (not that big of an issue though)",
"This is also for fixing an issue on the inputs that are a list. If the input is a list, the list is recursively processed. I thought you saw the thread we had with @LysandreJik . I'm copy pasting the explanation here.\r\n\r\nIf we have the input `input_ids=[[[1,2,3]], [[1,1,1]]]` after the first processing we get `{\"input_ids\": [[1,2,3]], \"attention_mask\": [[1,1,1]]}`, after the second processing we get `\"input_ids\": [1,2,3], \"attention_mask\": [1,1,1]`, after the third processing we get `input_ids=1, attention_mask=1` after the fourth processing we get an error.\r\n\r\nSo in order to avoid this issue, we should parse only once eveytime the input.",
"> This is also for fixing an issue on the inputs that are a list. If the input is a list, the list is recursively processed. I thought you saw the thread we had with @LysandreJik . I'm copy pasting the explanation here.\r\n> \r\n> If we have the input `input_ids=[[[1,2,3]], [[1,1,1]]]` after the first processing we get `{\"input_ids\": [[1,2,3]], \"attention_mask\": [[1,1,1]]}`, after the second processing we get `\"input_ids\": [1,2,3], \"attention_mask\": [1,1,1]`, after the third processing we get `input_ids=1, attention_mask=1` after the fourth processing we get an error.\r\n> \r\n> So in order to avoid this issue, we should parse only once eveytime the input.\r\n\r\nGot it! Thanks for sharing this here! Then yes, adding as little new logic as possible and boilerplate code to fix it is fine with me",
"I agree with @patrickvonplaten comments here and I would like to avoid touching all the files for this (however touching all the files for the change `outputs = self.bert(**inputs)` would be welcome as it's more readable).\r\n\r\nAn option to do it all in the `input_processing` without hurting the readability of all model files is to have `input_processing` return a subclass of dict that we could call `ProcessedInputs`. Then testing for that subclass at the beginning of the function (and directly returning the result in that case) would be enough.",
"> An option to do it all in the input_processing without hurting the readability of all model files is to have input_processing return a subclass of dict that we could call ProcessedInputs. Then testing for that subclass at the beginning of the function (and directly returning the result in that case) would be enough.\r\n\r\nWHOA!!! I love this idea! This is much better indeed, we get a better readability and a better checking of what has been processed or not. Thank you very much for sharing this idea ! I will close this PR and rethink it accordingly to what you proposed :)",
"If there is no bug, I think it may be wise not to spend too much time on it either. Doing the computation 4 times isn't much of an issue as it seems negligible compared to a forward pass' execution time.\r\n\r\nAlso, knowing how TF becomes annoying in graph mode, I would be very surprised if it could handle conditional statements with subclasses in its graph",
"Now that no issue has been identified, I would put this as an aside project. I don't mind checking this on my personal time :)"
] | 1,611 | 1,612 | 1,612 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to improve the input processing of the TF models. Currently the inputs of each model are processed at least twice (once in the model, once in the main layer) and at most four times for the Seq2Seq models (once in the model, once in the main layer, once in the encoder layer and once in the decoder layer). This is a bit overkill and slows down the performance of a forward pass.
To fix this issue, we introduce a flag that indicates whether the incoming inputs have already been processed: if they have, we keep them as they are; otherwise, we run the input processing.
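A hedged sketch of the `ProcessedInputs` idea mentioned in the discussion above (names and structure are illustrative, not the merged implementation):
```python
class ProcessedInputs(dict):
    """Marker type for inputs that input_processing has already produced."""


def input_processing(func, config, **kwargs):
    inputs = kwargs.get("input_ids")
    if isinstance(inputs, ProcessedInputs):
        # Already parsed once upstream; return as-is so nested lists are not
        # unpacked a second, third and fourth time.
        return inputs
    output = ProcessedInputs()
    # ... usual parsing of input_ids / attention_mask / boolean flags goes here ...
    return output
```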
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9834/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9834",
"html_url": "https://github.com/huggingface/transformers/pull/9834",
"diff_url": "https://github.com/huggingface/transformers/pull/9834.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9834.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/9833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9833/comments | https://api.github.com/repos/huggingface/transformers/issues/9833/events | https://github.com/huggingface/transformers/issues/9833 | 794,884,606 | MDU6SXNzdWU3OTQ4ODQ2MDY= | 9,833 | Mixed Precision support and avoid ‘分咯啊太’ | {
"login": "xuxingya",
"id": 13343428,
"node_id": "MDQ6VXNlcjEzMzQzNDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13343428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuxingya",
"html_url": "https://github.com/xuxingya",
"followers_url": "https://api.github.com/users/xuxingya/followers",
"following_url": "https://api.github.com/users/xuxingya/following{/other_user}",
"gists_url": "https://api.github.com/users/xuxingya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuxingya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuxingya/subscriptions",
"organizations_url": "https://api.github.com/users/xuxingya/orgs",
"repos_url": "https://api.github.com/users/xuxingya/repos",
"events_url": "https://api.github.com/users/xuxingya/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuxingya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9832/comments | https://api.github.com/repos/huggingface/transformers/issues/9832/events | https://github.com/huggingface/transformers/issues/9832 | 794,879,372 | MDU6SXNzdWU3OTQ4NzkzNzI= | 9,832 | ImportError: cannot import name 'get_last_checkpoint' | {
"login": "yuxuan2015",
"id": 15645056,
"node_id": "MDQ6VXNlcjE1NjQ1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/15645056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuxuan2015",
"html_url": "https://github.com/yuxuan2015",
"followers_url": "https://api.github.com/users/yuxuan2015/followers",
"following_url": "https://api.github.com/users/yuxuan2015/following{/other_user}",
"gists_url": "https://api.github.com/users/yuxuan2015/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuxuan2015/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuxuan2015/subscriptions",
"organizations_url": "https://api.github.com/users/yuxuan2015/orgs",
"repos_url": "https://api.github.com/users/yuxuan2015/repos",
"events_url": "https://api.github.com/users/yuxuan2015/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuxuan2015/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"HI there,\r\n\r\nwhat's your transformers version ? `get_last_checkpoint` is available on master, so you should install from source to use it",
"hi @yuxuan2015 , the latest stable release of transformers (4.2.2) has no 'get_last_checkpoint' function, so if you installed via package manager you won't be able to use that function. like patil said, you need to install from source",
"I solved this error after reinstalling transformers from pip. The version of transformers I installed is 4.3.3",
"As mentioned in the `examples/readme.md` [here](https://github.com/huggingface/transformers/tree/master/examples#important-note), to run the examples, always install from source.\r\n\r\nClosing this issue."
] | 1,611 | 1,615 | 1,615 | NONE | null | from transformers.trainer_utils import get_last_checkpoint, is_main_process
ImportError: cannot import name 'get_last_checkpoint' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9832/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/9831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/9831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/9831/comments | https://api.github.com/repos/huggingface/transformers/issues/9831/events | https://github.com/huggingface/transformers/pull/9831 | 794,865,661 | MDExOlB1bGxSZXF1ZXN0NTYyMzA4NzIy | 9,831 | [Setup.py] update jaxlib | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,611 | 1,611 | 1,611 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes failing CircleCI because of a version mismatch: https://app.circleci.com/pipelines/github/huggingface/transformers/19040/workflows/75599a81-f58c-40c6-8feb-f824d57a1d65/jobs/157385
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/9831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/9831/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/9831",
"html_url": "https://github.com/huggingface/transformers/pull/9831",
"diff_url": "https://github.com/huggingface/transformers/pull/9831.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/9831.patch",
"merged_at": 1611736461000
} |