url
stringlengths
62
66
repository_url
stringclasses
1 value
labels_url
stringlengths
76
80
comments_url
stringlengths
71
75
events_url
stringlengths
69
73
html_url
stringlengths
50
56
id
int64
377M
2.15B
node_id
stringlengths
18
32
number
int64
1
29.2k
title
stringlengths
1
487
user
dict
labels
list
state
stringclasses
2 values
locked
bool
2 classes
assignee
dict
assignees
list
comments
sequence
created_at
int64
1.54k
1.71k
updated_at
int64
1.54k
1.71k
closed_at
int64
1.54k
1.71k
author_association
stringclasses
4 values
active_lock_reason
stringclasses
2 values
body
stringlengths
0
234k
reactions
dict
timeline_url
stringlengths
71
75
state_reason
stringclasses
3 values
draft
bool
2 classes
pull_request
dict
https://api.github.com/repos/huggingface/transformers/issues/1911
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1911/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1911/comments
https://api.github.com/repos/huggingface/transformers/issues/1911/events
https://github.com/huggingface/transformers/pull/1911
526,957,579
MDExOlB1bGxSZXF1ZXN0MzQ0MzI2NjIw
1,911
Fix GPT2 docstring from #1906
{ "login": "bilal2vec", "id": 29356759, "node_id": "MDQ6VXNlcjI5MzU2NzU5", "avatar_url": "https://avatars.githubusercontent.com/u/29356759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilal2vec", "html_url": "https://github.com/bilal2vec", "followers_url": "https://api.github.com/users/bilal2vec/followers", "following_url": "https://api.github.com/users/bilal2vec/following{/other_user}", "gists_url": "https://api.github.com/users/bilal2vec/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilal2vec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilal2vec/subscriptions", "organizations_url": "https://api.github.com/users/bilal2vec/orgs", "repos_url": "https://api.github.com/users/bilal2vec/repos", "events_url": "https://api.github.com/users/bilal2vec/events{/privacy}", "received_events_url": "https://api.github.com/users/bilal2vec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=h1) Report\n> Merging [#1911](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/26db31e0c09a8b5e1ca7a61c454b159eab9d86be?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1911/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1911 +/- ##\n=======================================\n Coverage 84.04% 84.04% \n=======================================\n Files 97 97 \n Lines 14333 14333 \n=======================================\n Hits 12046 12046 \n Misses 2287 2287\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1911/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9ncHQyLnB5) | `96.72% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=footer). Last update [26db31e...65c5080](https://codecov.io/gh/huggingface/transformers/pull/1911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks!" ]
1,574
1,574
1,574
CONTRIBUTOR
null
Fixes #1906 Changes the GPT2 Tokenizer's docstrings to correctly explain the reason for `add_prefix_space` parameter
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1911/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1911", "html_url": "https://github.com/huggingface/transformers/pull/1911", "diff_url": "https://github.com/huggingface/transformers/pull/1911.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1911.patch", "merged_at": 1574699521000 }
https://api.github.com/repos/huggingface/transformers/issues/1910
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1910/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1910/comments
https://api.github.com/repos/huggingface/transformers/issues/1910/events
https://github.com/huggingface/transformers/issues/1910
526,751,179
MDU6SXNzdWU1MjY3NTExNzk=
1,910
Bart Tokenizer treat symbols in a word as a new word.
{ "login": "billchenxi", "id": 986935, "node_id": "MDQ6VXNlcjk4NjkzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/986935?v=4", "gravatar_id": "", "url": "https://api.github.com/users/billchenxi", "html_url": "https://github.com/billchenxi", "followers_url": "https://api.github.com/users/billchenxi/followers", "following_url": "https://api.github.com/users/billchenxi/following{/other_user}", "gists_url": "https://api.github.com/users/billchenxi/gists{/gist_id}", "starred_url": "https://api.github.com/users/billchenxi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/billchenxi/subscriptions", "organizations_url": "https://api.github.com/users/billchenxi/orgs", "repos_url": "https://api.github.com/users/billchenxi/repos", "events_url": "https://api.github.com/users/billchenxi/events{/privacy}", "received_events_url": "https://api.github.com/users/billchenxi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, this is due to the original BERT's tokenization. You can try it using the [google-research's implementation](https://github.com/google-research/bert):\r\n\r\n```py\r\nraw_text = 'text with percentage%'\r\ntokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=True)\r\ntokens = tokenizer.tokenize(raw_text)\r\n\r\nprint(tokens) # ['text', 'with', 'percentage', '%']\r\n```\r\n\r\nThe goal is to be as close as possible to the original implementation, hence the similar behavior concerning special tokens." ]
1,574
1,575
1,575
NONE
null
## 🐛 Bug <!-- Important information --> The model I am using is Bart: The problem arises when using: * [ ] `tokenizer.encode` * [ ] `tokenizer.decode` The tasks I am working on is: * [ ] Encode a string, then decode it back. ## To Reproduce ``` import torch from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-cased') test_string = 'text with percentage%' # encode Converts a string in a sequence of ids (integer), using the tokenizer and vocabulary. input_ids = tokenizer.encode(test_string) output = tokenizer.decode(input_ids) print(output) ``` `>>> text with percentage %` ## Expected behavior It should be `text with percentage%`, which treats the symbol in the word as one word. ## Environment * OS: MacOS * Python version: 3.7 * PyTorch version: 1.3 * PyTorch Transformers version (or branch): Master
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1910/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/1910/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1909
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1909/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1909/comments
https://api.github.com/repos/huggingface/transformers/issues/1909/events
https://github.com/huggingface/transformers/issues/1909
526,689,923
MDU6SXNzdWU1MjY2ODk5MjM=
1,909
Passing inputs to TFGPT2LMHeadModel results in error: 'TensorSliceDataset' object has no attribute 'shape'
{ "login": "rdisipio", "id": 7974270, "node_id": "MDQ6VXNlcjc5NzQyNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7974270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rdisipio", "html_url": "https://github.com/rdisipio", "followers_url": "https://api.github.com/users/rdisipio/followers", "following_url": "https://api.github.com/users/rdisipio/following{/other_user}", "gists_url": "https://api.github.com/users/rdisipio/gists{/gist_id}", "starred_url": "https://api.github.com/users/rdisipio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rdisipio/subscriptions", "organizations_url": "https://api.github.com/users/rdisipio/orgs", "repos_url": "https://api.github.com/users/rdisipio/repos", "events_url": "https://api.github.com/users/rdisipio/events{/privacy}", "received_events_url": "https://api.github.com/users/rdisipio/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! You can simply use `tf.constant` to build your input tensors, like this:\r\n\r\n```py\r\nraw_text = \"Here comes the sun\"\r\ntokens = tokenizer.encode(raw_text, add_special_tokens=False)\r\ninputs = {'input_ids': tf.constant(tokens)}\r\noutputs = model(inputs)\r\n```\r\n\r\nYou can use datasets when using building a custom loop or using keras.fit for example, as these will generally feed the tensors directly to the model, instead of feeding the `tf.data.Dataset` directly. Here's how I would go about starting a basic custom loop using a `tf.data.Dataset`:\r\n\r\n```py\r\nimport tensorflow as tf\r\nimport numpy as np\r\nfrom transformers import *\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\nmodel = TFGPT2LMHeadModel.from_pretrained('gpt2')\r\n\r\nraw_text = \"Here comes the sun\"\r\ntokens = tokenizer.encode(raw_text, add_special_tokens=False)\r\ninputs = tf.data.Dataset.from_tensor_slices( np.array([tokens]) )\r\n\r\nfor input_value in inputs:\r\n outputs = model(input_value)\r\n```\r\n\r\nPlease notice I converted to a numpy array by adding a dimension (`[tokens]`) otherwise you would end up with individual IDs held by the dataset rather than sequences of ids.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
## 🐛 Bug Model I am using (Bert, XLNet....): `TFGPT2LMHeadModel` Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [X ] my own modified scripts: (give details) ``` import tensorflow as tf from transformers import * tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = TFGPT2LMHeadModel.from_pretrained('gpt2') raw_text = "Here comes the sun" tokens = tokenizer.encode(raw_text, add_special_tokens=False) inputs = tf.data.Dataset.from_tensor_slices( np.array(tokens) ) inputs = {'input_ids': inputs} outputs = model(inputs) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: (give details) Trying to work out a stripped-down version of `run_generation.py` using `TFGPT2LMHeadModel` only. ## To Reproduce Steps to reproduce the behavior: just run the code above, you should get the following error: ``` Traceback (most recent call last): File "./generate_text.py", line 47, in <module> out = sample_sequence(tokens, num_samples=num_samples) File "./generate_text.py", line 27, in sample_sequence outputs = model(inputs) File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_gpt2.py", line 490, in call transformer_outputs = self.transformer(inputs, **kwargs) File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_gpt2.py", line 257, in call position_ids = tf.range(past_length, shape_list(input_ids)[-1] + past_length, dtype=tf.int32)[tf.newaxis, :] File "/Users/Riccardo/development/ideal/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 475, in shape_list static = x.shape.as_list() AttributeError: 'TensorSliceDataset' object has no attribute 'shape' ``` ## Expected behavior Still not sure! ## Environment * OS: MacOsX 11.14.6 (Mojave) * Python version: 3.7.5 * Tensorflow version: 2.0.0 * Tensorflow Transformers version (or branch): 2.1.1 * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1909/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1908
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1908/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1908/comments
https://api.github.com/repos/huggingface/transformers/issues/1908/events
https://github.com/huggingface/transformers/issues/1908
526,686,571
MDU6SXNzdWU1MjY2ODY1NzE=
1,908
Training transformer XL from scratch with my own dataset
{ "login": "Syrup274", "id": 28222839, "node_id": "MDQ6VXNlcjI4MjIyODM5", "avatar_url": "https://avatars.githubusercontent.com/u/28222839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Syrup274", "html_url": "https://github.com/Syrup274", "followers_url": "https://api.github.com/users/Syrup274/followers", "following_url": "https://api.github.com/users/Syrup274/following{/other_user}", "gists_url": "https://api.github.com/users/Syrup274/gists{/gist_id}", "starred_url": "https://api.github.com/users/Syrup274/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Syrup274/subscriptions", "organizations_url": "https://api.github.com/users/Syrup274/orgs", "repos_url": "https://api.github.com/users/Syrup274/repos", "events_url": "https://api.github.com/users/Syrup274/events{/privacy}", "received_events_url": "https://api.github.com/users/Syrup274/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think yes, \r\nJust load the model (class) and then start training. As it is mentioned [here](https://huggingface.co/transformers/model_doc/transformerxl.html#transformers.TransfoXLModel), model classes are just a PyTorch torch.nn.Module sub-class.\r\n\r\n> This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.", "Is it also possible to train tensorflow model?", "If we think about [this](https://huggingface.co/transformers/model_doc/transformerxl.html#tftransfoxlmodel) expression, yes. \r\n\r\n> This model is a tf.keras.Model tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.\r\n\r\nAlso, check [this](https://github.com/huggingface/transformers#quick-tour-tf-20-training-and-pytorch-interoperability) out.", "Thanks very much!" ]
1,574
1,621
1,574
NONE
null
## ❓ Questions & Help Is it possible to train a transformer XL model from scratch with my own dataset? Just initialize the model with default params and compile & fit the model? <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1908/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1907
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1907/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1907/comments
https://api.github.com/repos/huggingface/transformers/issues/1907/events
https://github.com/huggingface/transformers/issues/1907
526,617,266
MDU6SXNzdWU1MjY2MTcyNjY=
1,907
lm_fine-tuning on small dataset of 3 documents
{ "login": "vr25", "id": 22553367, "node_id": "MDQ6VXNlcjIyNTUzMzY3", "avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vr25", "html_url": "https://github.com/vr25", "followers_url": "https://api.github.com/users/vr25/followers", "following_url": "https://api.github.com/users/vr25/following{/other_user}", "gists_url": "https://api.github.com/users/vr25/gists{/gist_id}", "starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vr25/subscriptions", "organizations_url": "https://api.github.com/users/vr25/orgs", "repos_url": "https://api.github.com/users/vr25/repos", "events_url": "https://api.github.com/users/vr25/events{/privacy}", "received_events_url": "https://api.github.com/users/vr25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How do you know you have exact identical `pytorch_model.bin` files? Do you just compare file sizes? IF so, it is not a qualified method just because weights usually are just float numbers and they (almost) always occupy same size on the disk. You can compare the hashes of files to make sure.", "Yes, I just thought of comparing the files naively by comparing their sizes. \r\n\r\nI see, yes, \"hashes\" sounds a much better way of comparing files, thanks. I'll post here if that works.\r\n\r\nAlso, do you have any beginner suggestions on generating the hashes quickly and efficiently?", "I used md5sum pytorch_model.bin to generate the hashes of the files and both are different. Anyway, thanks, again!" ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, I am trying to use [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) on a sample dataset [here](https://github.com/dmlc/gluon-nlp/blob/master/scripts/bert/sample_text.txt). I am running the script with following arguments but I get the exact identical pytorch_model.bin [440.5 MB] saved in the output_dir=op: python run_lm_finetuning.py --train_data_file=sample_text.txt --output_dir=op --mlm --do_train --overwrite_output_dir --do_lower_case --save_steps=50 I was wondering if this dataset of 3 documents is too small to fine-tune on or if I can modify some arguments to get a domain-fine-tuned model. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1907/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1906
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1906/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1906/comments
https://api.github.com/repos/huggingface/transformers/issues/1906/events
https://github.com/huggingface/transformers/issues/1906
526,613,213
MDU6SXNzdWU1MjY2MTMyMTM=
1,906
Documentation error in GPT2Tokenizer
{ "login": "CrafterKolyan", "id": 9883873, "node_id": "MDQ6VXNlcjk4ODM4NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9883873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CrafterKolyan", "html_url": "https://github.com/CrafterKolyan", "followers_url": "https://api.github.com/users/CrafterKolyan/followers", "following_url": "https://api.github.com/users/CrafterKolyan/following{/other_user}", "gists_url": "https://api.github.com/users/CrafterKolyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/CrafterKolyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CrafterKolyan/subscriptions", "organizations_url": "https://api.github.com/users/CrafterKolyan/orgs", "repos_url": "https://api.github.com/users/CrafterKolyan/repos", "events_url": "https://api.github.com/users/CrafterKolyan/events{/privacy}", "received_events_url": "https://api.github.com/users/CrafterKolyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "According to the code (https://github.com/huggingface/transformers/blob/master/transformers/tokenization_gpt2.py#L183) the docstring is incomplete. \r\n\r\nThe `add_prefix_space` parameter should be passed to `tokenizer.tokenize()` as well as `tokenizer.encode()` or `tokenizer.decode()`.\r\n\r\nAs well, the docstring should actually say:\r\n\r\n```\r\nOtherwise, this tokenizer encode and decode method will not conserve a space at the beginning of a string: tokenizer.decode(tokenizer.encode(“ Hello”)) = ”Hello”\r\n```\r\n\r\nAccording to #1380, the Roberta/GPT2 tokenizer expects sequences to start with a space. Simply prepending a space to the input sequence doesn't give the same result because `GPT2Tokenizer` only overrides the internally-used `_tokenizer()` method and `PreTrainedTokenizer`'s `tokenize()` method (which is called by the user) does it's own preprocessing." ]
1,574
1,574
1,574
CONTRIBUTOR
null
Documentation page: https://huggingface.co/transformers/model_doc/gpt2.html#transformers.GPT2Tokenizer Code place: https://github.com/huggingface/transformers/blob/d7d36181fdefdabadc53adf51bed4a2680f5880a/transformers/tokenization_gpt2.py#L112-L113 This phrase: > Otherwise, this tokenizer encode and decode method will not conserve the absence of a space at the beginning of a string: tokenizer.decode(tokenizer.encode(“Hello”)) = ” Hello” is **NOT** correct. Actually: > tokenizer.decode(tokenizer.encode(“Hello”)) = ”Hello” Try this example: ``` from transformers import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') print("'" + tokenizer.decode(tokenizer.encode("Hello")) + "'") ``` Output: ``` 'Hello' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1906/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1905
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1905/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1905/comments
https://api.github.com/repos/huggingface/transformers/issues/1905/events
https://github.com/huggingface/transformers/issues/1905
526,599,866
MDU6SXNzdWU1MjY1OTk4NjY=
1,905
run_summarization_finetuning.py
{ "login": "jmamou", "id": 19263306, "node_id": "MDQ6VXNlcjE5MjYzMzA2", "avatar_url": "https://avatars.githubusercontent.com/u/19263306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmamou", "html_url": "https://github.com/jmamou", "followers_url": "https://api.github.com/users/jmamou/followers", "following_url": "https://api.github.com/users/jmamou/following{/other_user}", "gists_url": "https://api.github.com/users/jmamou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmamou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmamou/subscriptions", "organizations_url": "https://api.github.com/users/jmamou/orgs", "repos_url": "https://api.github.com/users/jmamou/repos", "events_url": "https://api.github.com/users/jmamou/events{/privacy}", "received_events_url": "https://api.github.com/users/jmamou/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think so. The script called `run_summarization_finetuning.py` has the goal of extracting text summary given as input a text document. This code shows how to fine-tune a text summarizer model (called **BertSum**) with two different datasets: CNN and Daily Mail.\r\n\r\n> ## Questions & Help\r\n> Hi\r\n> Does your code run_summarization_finetuning.py implement the abstractive summarization approach described in \"Text summarization with pretrained encoders.\" by Liu, Yang, and Mirella Lapata?\r\n> Thanks", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
CONTRIBUTOR
null
## ❓ Questions & Help Hi Does your code run_summarization_finetuning.py implement the abstractive summarization approach described in "Text summarization with pretrained encoders." by Liu, Yang, and Mirella Lapata? Thanks <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1905/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1904
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1904/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1904/comments
https://api.github.com/repos/huggingface/transformers/issues/1904/events
https://github.com/huggingface/transformers/issues/1904
526,553,318
MDU6SXNzdWU1MjY1NTMzMTg=
1,904
Typo in Documentation for GPT2LM Output "past"
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, feel free to open a PR to fix this Patrick ;-)" ]
1,574
1,577
1,577
MEMBER
null
I think the described output shape for "past" for GPT-2 is wrong. In: [https://github.com/huggingface/transformers/blob/0cdfcca24b5739e25b584c05b866baa19ea382ef/transformers/modeling_gpt2.py#L332](url) it says the output shape of each key, value tensor in each self attention layer is ``(batch_size, num_heads, sequence_length, sequence_length)`` , but it should be ``(batch_size, num_heads, sequence_length, hidden_size / num_heads)`` or ``(batch_size, num_heads, sequence_length, embed_size_per_head)`` .Also since the past tensor per layer always shows both key and value tensors it might even be clearer to write: ``(2, batch_size, num_heads, sequence_length, embed_size_per_head)``
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1904/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1903
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1903/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1903/comments
https://api.github.com/repos/huggingface/transformers/issues/1903/events
https://github.com/huggingface/transformers/pull/1903
526,487,351
MDExOlB1bGxSZXF1ZXN0MzQzOTQyMjc3
1,903
Valohai integration
{ "login": "JuhaKiili", "id": 1525350, "node_id": "MDQ6VXNlcjE1MjUzNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1525350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JuhaKiili", "html_url": "https://github.com/JuhaKiili", "followers_url": "https://api.github.com/users/JuhaKiili/followers", "following_url": "https://api.github.com/users/JuhaKiili/following{/other_user}", "gists_url": "https://api.github.com/users/JuhaKiili/gists{/gist_id}", "starred_url": "https://api.github.com/users/JuhaKiili/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JuhaKiili/subscriptions", "organizations_url": "https://api.github.com/users/JuhaKiili/orgs", "repos_url": "https://api.github.com/users/JuhaKiili/repos", "events_url": "https://api.github.com/users/JuhaKiili/events{/privacy}", "received_events_url": "https://api.github.com/users/JuhaKiili/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=h1) Report\n> Merging [#1903](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cdfcca24b5739e25b584c05b866baa19ea382ef?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1903/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1903 +/- ##\n=======================================\n Coverage 84.04% 84.04% \n=======================================\n Files 97 97 \n Lines 14333 14333 \n=======================================\n Hits 12046 12046 \n Misses 2287 2287\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=footer). Last update [0cdfcca...66fc8d2](https://codecov.io/gh/huggingface/transformers/pull/1903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> * Loss logging no longer accumulates (why was that?) but shows raw loss\r\n\r\nWith the revised commits, this is no longer true (but I'm not able to edit @JuhaKiili's original message, and he's AFK at Slush :) )\r\n\r\n" ]
1,574
1,575
1,575
CONTRIBUTOR
null
Huggingface / Transformers Valohai integration Changes to existing Transformers code: - Prints Valohai-styled logs (JSON) Additional info: - valohai.yaml has most (but not all) parameters used by run_glue.py - Valohai execution downloads all glue datas by default (still pretty fast). Download script placed in `utils/download_glue_data.py`. - Valohai execution only saves model/checkpoint at the end by default (adjust with Valohai UI) - Valohai execution logs every 25 steps (adjust with Valohai UI)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1903/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1903", "html_url": "https://github.com/huggingface/transformers/pull/1903", "diff_url": "https://github.com/huggingface/transformers/pull/1903.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1903.patch", "merged_at": 1575385982000 }
https://api.github.com/repos/huggingface/transformers/issues/1902
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1902/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1902/comments
https://api.github.com/repos/huggingface/transformers/issues/1902/events
https://github.com/huggingface/transformers/pull/1902
526,477,413
MDExOlB1bGxSZXF1ZXN0MzQzOTM0Mjky
1,902
Add CamemBERT models to modeling_auto
{ "login": "Evpok", "id": 1656541, "node_id": "MDQ6VXNlcjE2NTY1NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1656541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Evpok", "html_url": "https://github.com/Evpok", "followers_url": "https://api.github.com/users/Evpok/followers", "following_url": "https://api.github.com/users/Evpok/following{/other_user}", "gists_url": "https://api.github.com/users/Evpok/gists{/gist_id}", "starred_url": "https://api.github.com/users/Evpok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Evpok/subscriptions", "organizations_url": "https://api.github.com/users/Evpok/orgs", "repos_url": "https://api.github.com/users/Evpok/repos", "events_url": "https://api.github.com/users/Evpok/events{/privacy}", "received_events_url": "https://api.github.com/users/Evpok/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=h1) Report\n> Merging [#1902](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e70cdf083ddb8bfe298d43e6d70d698a3a2f56d3?src=pr&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `14.28%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1902/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1902 +/- ##\n==========================================\n- Coverage 84.08% 84.04% -0.04% \n==========================================\n Files 97 97 \n Lines 14316 14323 +7 \n==========================================\n+ Hits 12037 12038 +1 \n- Misses 2279 2285 +6\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1902/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `31.81% <14.28%> (-1.52%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=footer). Last update [e70cdf0...75ef125](https://codecov.io/gh/huggingface/transformers/pull/1902?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks @Evpok !" ]
1,574
1,574
1,574
CONTRIBUTOR
null
CamemBERT was in autoconfig but not in automodel, this PR aims to correct that. Also first PR here so please tell me if I missed some things :-)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1902/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1902", "html_url": "https://github.com/huggingface/transformers/pull/1902", "diff_url": "https://github.com/huggingface/transformers/pull/1902.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1902.patch", "merged_at": 1574695264000 }
https://api.github.com/repos/huggingface/transformers/issues/1901
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1901/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1901/comments
https://api.github.com/repos/huggingface/transformers/issues/1901/events
https://github.com/huggingface/transformers/issues/1901
526,474,324
MDU6SXNzdWU1MjY0NzQzMjQ=
1,901
Methods get_input_embeddings() and set_input_embeddings() appear in documentation but not available.
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Did you check if the methods are in `modeling_utils.py` in your local package?\r\nSee #1837 . As @julien-c said, the version of the lib you use may not be in sync with the scripts you run.\r\nTry to install the lib from `master`:\r\n`pip install git+https://github.com/huggingface/transformers`\r\n", "The doc you linked to is for the `master` version, if I'm not mistaken. Maybe we should make that clearer, cc @LysandreJik \r\n\r\nYes, @loveritsu929 is right – install from source if you want to use those methods right now. Thanks!" ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help Hi, I'm trying to access the word embeddings layer of Bert multilingual, as I want to take out all tokens not belonging to spanish and add some tokens which are part of this language, with the objective of adapting BERT multilingual to spanish. The thing is, in your documentation you claim that there are 2 functions for this purpose: get_input_embeddings() and set_input_embeddings() (https://huggingface.co/transformers/model_doc/bert.html); in fact there is a link to the source code and they appear to be there. However, once I try to do this in version 2.1.1. (the one documentation refers to), none of this methods are part of BertModel class, which is astonishing to me. Please tell me what's wrong!! Did you remove these methods from current versions but have a unique documentation for all versions? Which versions can I find these methods in? Is the source code showed in the documentation different from the actual source code of the library? <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1901/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1900
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1900/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1900/comments
https://api.github.com/repos/huggingface/transformers/issues/1900/events
https://github.com/huggingface/transformers/issues/1900
526,465,963
MDU6SXNzdWU1MjY0NjU5NjM=
1,900
Can GPT2DoubleHeadsModel be used for regular next token prediction task without adjusting its head?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You don't need to do anything in order to make it predict the next word. That's what it is. But you may consider finetuning it with your own dataset.", "Hello,\r\n\r\nThank you for your reply.\r\nso the **GPT2DoubleHeadsModel** (NOT GPT2LMHeadModel but the DoubleHeadsModel), without any adjustment on its head, can be used for any \"non-multiple-choice-based\" next token prediction, as well as for the multiple-choice questions?\r\n\r\nThank you,", "Edit:\r\nI'm not sure. I did missread the GPT2DoubleHeadsModel.", "If GPT2DoubleHeadsModel can process both multiple-choice questions as well as non-multiple-choice next token predictions without any adjustment, why did HuggingFace make 2 different GPT2 models -- GPT2DoubleHeadsModel and GPT2LMHeadModel ?\r\n\r\nCan GPT2DoubleHeadsModel process both multiple-choice questions as well as non-multiple-choice next token predictions without any further adjustment(s) [e.g. adjustment on its head, etc,]?", "So?" ]
1,574
1,644
1,574
NONE
null
Hello, According to the HuggingFace Transformer's website (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), **GPT2DoubleHeadsModel** is the GPT2 Model transformer with a language modelling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. Does this mean that we can use the **GPT2DoubleHeadsModel** for the regular language modelling task (i.e. next word prediction) without modifying its head? or would I need to adjust the head of **GPT2DoubleHeadsModel** if I want to do the next word prediction, since GPT2DoubleHeadsModel is for answering multiple-choice type questions only? Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1900/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1899
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1899/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1899/comments
https://api.github.com/repos/huggingface/transformers/issues/1899/events
https://github.com/huggingface/transformers/issues/1899
526,340,095
MDU6SXNzdWU1MjYzNDAwOTU=
1,899
Classify entities - run_ner script
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
Hello all, I am just wondering what extra input to the "BertForTokenClassification" if I want to classify the entities TO (PER, LOC ..) . Note that the entities are given in advance. I used run_ner script but it extracts the entities and classify them (the extraction is not needed). I did not get how the script(or the input) can be modified for my task? any idea?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1899/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1898
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1898/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1898/comments
https://api.github.com/repos/huggingface/transformers/issues/1898/events
https://github.com/huggingface/transformers/issues/1898
526,323,227
MDU6SXNzdWU1MjYzMjMyMjc=
1,898
Is the usage of scheduler described in README correct?
{ "login": "tamuhey", "id": 24998666, "node_id": "MDQ6VXNlcjI0OTk4NjY2", "avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamuhey", "html_url": "https://github.com/tamuhey", "followers_url": "https://api.github.com/users/tamuhey/followers", "following_url": "https://api.github.com/users/tamuhey/following{/other_user}", "gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions", "organizations_url": "https://api.github.com/users/tamuhey/orgs", "repos_url": "https://api.github.com/users/tamuhey/repos", "events_url": "https://api.github.com/users/tamuhey/events{/privacy}", "received_events_url": "https://api.github.com/users/tamuhey/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blame/master/README.md#L546 I think it's not here to do `scheduler.step` but in epoch loop.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1898/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1897
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1897/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1897/comments
https://api.github.com/repos/huggingface/transformers/issues/1897/events
https://github.com/huggingface/transformers/issues/1897
526,312,104
MDU6SXNzdWU1MjYzMTIxMDQ=
1,897
Distilling GPT2 with gives OOM
{ "login": "snaik2016", "id": 18183245, "node_id": "MDQ6VXNlcjE4MTgzMjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snaik2016", "html_url": "https://github.com/snaik2016", "followers_url": "https://api.github.com/users/snaik2016/followers", "following_url": "https://api.github.com/users/snaik2016/following{/other_user}", "gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}", "starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions", "organizations_url": "https://api.github.com/users/snaik2016/orgs", "repos_url": "https://api.github.com/users/snaik2016/repos", "events_url": "https://api.github.com/users/snaik2016/events{/privacy}", "received_events_url": "https://api.github.com/users/snaik2016/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Do you need pretrain distilgpt2 from scratch? You can consider just finetuning it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
## ❓ Questions & Help Distilling GPT2 gives OOM what is the best way to fit both teacher student in single GPU and train? Tried reducing batch size but that itself results into an error. File "train.py", line 285, in main distiller.train() File "trabsformersexamples\distillation\distiller.py", line 340, in train self.step(input_ids=token_ids, attention_mask=attn_mask, lm_labels=lm_labels) File "trabsformersexamples\distillation\distiller.py", line 378, in step s_logits, _, s_hidden_states = self.student(input_ids=input_ids, attention_mask=None) # (bs, seq_length, voc_size) File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 549, in forward inputs_embeds=inputs_embeds) File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "conda\conda\envs\pytorch\lib\site-packages\transformers\modeling_gpt2.py", line 439, in forward inputs_embeds = self.wte(input_ids) File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\modules\sparse.py", line 114, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "conda\conda\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1484, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.IntTensor instead (while checking arguments for embedding)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1897/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1896
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1896/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1896/comments
https://api.github.com/repos/huggingface/transformers/issues/1896/events
https://github.com/huggingface/transformers/issues/1896
526,304,569
MDU6SXNzdWU1MjYzMDQ1Njk=
1,896
Tokenizing/Loading Data for GPT-2 (1 example per line)
{ "login": "varunnambikrishnan", "id": 55714145, "node_id": "MDQ6VXNlcjU1NzE0MTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/55714145?v=4", "gravatar_id": "", "url": "https://api.github.com/users/varunnambikrishnan", "html_url": "https://github.com/varunnambikrishnan", "followers_url": "https://api.github.com/users/varunnambikrishnan/followers", "following_url": "https://api.github.com/users/varunnambikrishnan/following{/other_user}", "gists_url": "https://api.github.com/users/varunnambikrishnan/gists{/gist_id}", "starred_url": "https://api.github.com/users/varunnambikrishnan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varunnambikrishnan/subscriptions", "organizations_url": "https://api.github.com/users/varunnambikrishnan/orgs", "repos_url": "https://api.github.com/users/varunnambikrishnan/repos", "events_url": "https://api.github.com/users/varunnambikrishnan/events{/privacy}", "received_events_url": "https://api.github.com/users/varunnambikrishnan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Please check this out: https://github.com/huggingface/transformers/issues/1816#issuecomment-554759751\r\nThis finetuning script https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py and the library do almost everything for you. Thus you don't need to split the text into fixed blocksize etc. Just give it one datafile for training and one for evaluating.", "I was using the script but when generating samples it seemed like it was mixing information from multiple examples in the training set, when they should be treated as individual examples. Is there a way to avoid this?", "Like in this thread it mentions not concatenating different articles in the same input: https://github.com/huggingface/transformers/issues/1145#issuecomment-527225854? Is there a way to do that with single examples (if they are not correlated)", "Language models look for entire corpus. You can seperate them via `<|endoftext|>` special token. The model learns (and already learned if you based it on a pretrained one while finetuning) to separate contexts by `<|endoftext|>` token. This is a common case for language models. For example, XLNet uses `<eod>` as separator. This is also implied at the original GPT paper while covering how to train GPT for document classification etc. See figure one: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf\r\nBut, still if you want to make sure that different rows in the data are not even connected while training, then you can pad each line until fill `block_size`. For example, GPT2 has 1024 input size and you have a data long 1540 (=1024+516) as tokens. Then you could have: \r\n<1024 tokens><516 tokens + 508 padding><nextdata..>\r\nThus you can make sure that GPT2 doesn't get mixed data as input.", "Also, please check this tutorial http://jalammar.github.io/illustrated-gpt2/ by Jay Alammar", "I thought GPT-2 didn't use padding tokens https://github.com/huggingface/transformers/issues/1435#issuecomment-538779523 and it's unclear to me how to use the attention mask. I'll try just using the <|endoftext|> token however", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
I want to finetune GPT-2 on a dataset that has one example per line, and the examples all have different length. What is the best way to alter the TextDataset class to allow for this? I understand GPT-2 requires fixed length inputs but I'm not sure how to apply the attention mask to achieve this? Also do I need to add a bos/eos token to each line? Appreciate the help
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1896/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1895
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1895/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1895/comments
https://api.github.com/repos/huggingface/transformers/issues/1895/events
https://github.com/huggingface/transformers/pull/1895
526,299,089
MDExOlB1bGxSZXF1ZXN0MzQzNzkyMDI0
1,895
Update Squ
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you give at least some explanation in the title and the body of your post explaining what fixed or changed? Now it's very vague. ", "> Could you give at least some explanation in the title and the body of your post explaining what fixed or changed? Now it's very vague.\r\n\r\nSorry about the vagueness, I'll do this for any new pull requests. As for this one, it's obsolete now since this script has been refactored. " ]
1,574
1,578
1,578
CONTRIBUTOR
null
Update https://github.com/huggingface/transformers/blob/master/examples/run_squad.py Implemented solution from https://github.com/huggingface/transformers/issues/1837#issuecomment-554206349
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1895/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1895", "html_url": "https://github.com/huggingface/transformers/pull/1895", "diff_url": "https://github.com/huggingface/transformers/pull/1895.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1895.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1894
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1894/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1894/comments
https://api.github.com/repos/huggingface/transformers/issues/1894/events
https://github.com/huggingface/transformers/issues/1894
526,267,577
MDU6SXNzdWU1MjYyNjc1Nzc=
1,894
`overwrite_cache` argument in `run_lm_finetuning.py` not used at all
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, thank you!" ]
1,574
1,574
1,574
CONTRIBUTOR
null
There is an unused argument in `run_lm_finetuning.py` https://github.com/huggingface/transformers/blob/e70cdf083ddb8bfe298d43e6d70d698a3a2f56d3/examples/run_lm_finetuning.py#L418 Is it forgotten?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1894/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1893
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1893/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1893/comments
https://api.github.com/repos/huggingface/transformers/issues/1893/events
https://github.com/huggingface/transformers/pull/1893
526,261,028
MDExOlB1bGxSZXF1ZXN0MzQzNzYwOTA5
1,893
Cleanup TPU bits from `run_glue.py`
{ "login": "jysohn23", "id": 19496130, "node_id": "MDQ6VXNlcjE5NDk2MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jysohn23", "html_url": "https://github.com/jysohn23", "followers_url": "https://api.github.com/users/jysohn23/followers", "following_url": "https://api.github.com/users/jysohn23/following{/other_user}", "gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}", "starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions", "organizations_url": "https://api.github.com/users/jysohn23/orgs", "repos_url": "https://api.github.com/users/jysohn23/repos", "events_url": "https://api.github.com/users/jysohn23/events{/privacy}", "received_events_url": "https://api.github.com/users/jysohn23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great, thanks @jysohn23 !", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=h1) Report\n> Merging [#1893](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/454455c695ff38df1ed3670a43677fdd1abcedf3?src=pr&el=desc) will **decrease** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1893/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1893 +/- ##\n==========================================\n- Coverage 84.05% 84.03% -0.03% \n==========================================\n Files 97 94 -3 \n Lines 14316 14032 -284 \n==========================================\n- Hits 12034 11792 -242 \n+ Misses 2282 2240 -42\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.95%)` | :arrow_down: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <0%> (-0.37%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.09% <0%> (-0.09%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <0%> (-0.08%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.59% <0%> (-0.07%)` | :arrow_down: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/1893/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=footer). Last update [454455c...2a9df0c](https://codecov.io/gh/huggingface/transformers/pull/1893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,574
1,574
1,574
COLLABORATOR
null
TPU runner is currently implemented in: https://github.com/pytorch-tpu/transformers/blob/tpu/examples/run_glue_tpu.py. We plan to upstream this directly into `huggingface/transformers` (either `master` or `tpu`) branch once it's been more thoroughly tested.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1893/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1893", "html_url": "https://github.com/huggingface/transformers/pull/1893", "diff_url": "https://github.com/huggingface/transformers/pull/1893.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1893.patch", "merged_at": 1574290475000 }
https://api.github.com/repos/huggingface/transformers/issues/1892
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1892/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1892/comments
https://api.github.com/repos/huggingface/transformers/issues/1892/events
https://github.com/huggingface/transformers/issues/1892
526,119,169
MDU6SXNzdWU1MjYxMTkxNjk=
1,892
error on bert.fit for Squad dataset
{ "login": "stardoxx", "id": 32615370, "node_id": "MDQ6VXNlcjMyNjE1Mzcw", "avatar_url": "https://avatars.githubusercontent.com/u/32615370?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stardoxx", "html_url": "https://github.com/stardoxx", "followers_url": "https://api.github.com/users/stardoxx/followers", "following_url": "https://api.github.com/users/stardoxx/following{/other_user}", "gists_url": "https://api.github.com/users/stardoxx/gists{/gist_id}", "starred_url": "https://api.github.com/users/stardoxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stardoxx/subscriptions", "organizations_url": "https://api.github.com/users/stardoxx/orgs", "repos_url": "https://api.github.com/users/stardoxx/repos", "events_url": "https://api.github.com/users/stardoxx/events{/privacy}", "received_events_url": "https://api.github.com/users/stardoxx/received_events", "type": "User", "site_admin": false }
[ { "id": 1108649070, "node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information", "name": "Need more information", "color": "d876e3", "default": false, "description": "Further information is requested" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "We can't help you without more information.\r\nPlease fill in the issues templates required information.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,581
1,581
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> The size of tensor a (384) must match the size of tensor b (12) at non-singleton dimension 1. On passing values to fit function in modelling/bert.py facing above error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1892/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1891
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1891/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1891/comments
https://api.github.com/repos/huggingface/transformers/issues/1891/events
https://github.com/huggingface/transformers/pull/1891
526,087,315
MDExOlB1bGxSZXF1ZXN0MzQzNjAyOTQ5
1,891
fixes issue with unrecognized arguments for AdamW
{ "login": "oleg-sharethis", "id": 50597761, "node_id": "MDQ6VXNlcjUwNTk3NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/50597761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oleg-sharethis", "html_url": "https://github.com/oleg-sharethis", "followers_url": "https://api.github.com/users/oleg-sharethis/followers", "following_url": "https://api.github.com/users/oleg-sharethis/following{/other_user}", "gists_url": "https://api.github.com/users/oleg-sharethis/gists{/gist_id}", "starred_url": "https://api.github.com/users/oleg-sharethis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oleg-sharethis/subscriptions", "organizations_url": "https://api.github.com/users/oleg-sharethis/orgs", "repos_url": "https://api.github.com/users/oleg-sharethis/repos", "events_url": "https://api.github.com/users/oleg-sharethis/events{/privacy}", "received_events_url": "https://api.github.com/users/oleg-sharethis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=h1) Report\n> Merging [#1891](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/transformers/commit/1b35d05d4b3c121a9740544aa6f884f1039780b1?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1891/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## xlnet #1891 +/- ##\n=====================================\n Coverage 78.9% 78.9% \n=====================================\n Files 34 34 \n Lines 6181 6181 \n=====================================\n Hits 4877 4877 \n Misses 1304 1304\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=footer). Last update [1b35d05...dcbea8c](https://codecov.io/gh/huggingface/transformers/pull/1891?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Same as #1890 " ]
1,574
1,575
1,575
NONE
null
as suggested in https://github.com/huggingface/transformers/issues/830
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1891/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1891", "html_url": "https://github.com/huggingface/transformers/pull/1891", "diff_url": "https://github.com/huggingface/transformers/pull/1891.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1891.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1890
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1890/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1890/comments
https://api.github.com/repos/huggingface/transformers/issues/1890/events
https://github.com/huggingface/transformers/pull/1890
526,045,032
MDExOlB1bGxSZXF1ZXN0MzQzNTYyNzE1
1,890
Correction for the tuple problem
{ "login": "oleg-sharethis", "id": 50597761, "node_id": "MDQ6VXNlcjUwNTk3NzYx", "avatar_url": "https://avatars.githubusercontent.com/u/50597761?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oleg-sharethis", "html_url": "https://github.com/oleg-sharethis", "followers_url": "https://api.github.com/users/oleg-sharethis/followers", "following_url": "https://api.github.com/users/oleg-sharethis/following{/other_user}", "gists_url": "https://api.github.com/users/oleg-sharethis/gists{/gist_id}", "starred_url": "https://api.github.com/users/oleg-sharethis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oleg-sharethis/subscriptions", "organizations_url": "https://api.github.com/users/oleg-sharethis/orgs", "repos_url": "https://api.github.com/users/oleg-sharethis/repos", "events_url": "https://api.github.com/users/oleg-sharethis/events{/privacy}", "received_events_url": "https://api.github.com/users/oleg-sharethis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=h1) Report\n> Merging [#1890](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/transformers/commit/1b35d05d4b3c121a9740544aa6f884f1039780b1?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1890/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## xlnet #1890 +/- ##\n=====================================\n Coverage 78.9% 78.9% \n=====================================\n Files 34 34 \n Lines 6181 6181 \n=====================================\n Hits 4877 4877 \n Misses 1304 1304\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=footer). Last update [1b35d05...5365fbd](https://codecov.io/gh/huggingface/transformers/pull/1890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi, we don't have finetune_on_pregenerated.py in the examples anymore but a simpler script, `run_lm_finetuning`. Closing for now." ]
1,574
1,575
1,575
NONE
null
This fixes the problem described and corrected in https://github.com/huggingface/transformers/issues/831
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1890/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1890", "html_url": "https://github.com/huggingface/transformers/pull/1890", "diff_url": "https://github.com/huggingface/transformers/pull/1890.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1890.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1889
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1889/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1889/comments
https://api.github.com/repos/huggingface/transformers/issues/1889/events
https://github.com/huggingface/transformers/pull/1889
526,035,853
MDExOlB1bGxSZXF1ZXN0MzQzNTUzOTI2
1,889
explain how to successfully run examples in readme and doc
{ "login": "rlouf", "id": 3885044, "node_id": "MDQ6VXNlcjM4ODUwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rlouf", "html_url": "https://github.com/rlouf", "followers_url": "https://api.github.com/users/rlouf/followers", "following_url": "https://api.github.com/users/rlouf/following{/other_user}", "gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}", "starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rlouf/subscriptions", "organizations_url": "https://api.github.com/users/rlouf/orgs", "repos_url": "https://api.github.com/users/rlouf/repos", "events_url": "https://api.github.com/users/rlouf/events{/privacy}", "received_events_url": "https://api.github.com/users/rlouf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=h1) Report\n> Merging [#1889](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/454455c695ff38df1ed3670a43677fdd1abcedf3?src=pr&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1889/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1889 +/- ##\n==========================================\n+ Coverage 84.05% 84.08% +0.02% \n==========================================\n Files 97 97 \n Lines 14316 14316 \n==========================================\n+ Hits 12034 12037 +3 \n+ Misses 2282 2279 -3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1889/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+1.45%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=footer). Last update [454455c...5cd8487](https://codecov.io/gh/huggingface/transformers/pull/1889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Cool, this is important. I feel like installing from source without cloning (e.g. using `pip install git+https://github.com/huggingface/transformers`) would put less strain on the users' set-up, is there a reason we prefer cloning (I know the aforementioned method also clones, but it puts it in a temporary folder)? This is fine as it is, I'm just curious.", "That way you also cover the case of people who clone the repository to execute the examples (I would believe the majority of users), and encourage this behavior which is a much better practice than copy/pasting.\r\nDoing so you are absolutely sure the version of the library is the exact same as the version of the examples. This may not be true for `pip install git+https://github.com/huggingface/transformers` if the library has since changed.", "> That way you also cover the case of people who clone the repository to execute the examples (I would believe the majority of users), and encourage this behavior which is a much better practice than copy/pasting.\r\n> Doing so you are absolutely sure the version of the library is the exact same as the version of the examples. This may not be true for `pip install git+https://github.com/huggingface/transformers` if the library has since changed.\r\n\r\nin fact, passing a branch name, a commit hash, a tag name or a git ref is possible like so:\r\n\r\n```\r\n[-e] git://git.example.com/MyProject.git@master#egg=MyProject\r\n[-e] git://git.example.com/[email protected]#egg=MyProject\r\n[-e] git://git.example.com/MyProject.git@da39a3ee5e6b4b0d3255bfef95601890afd80709#egg=MyProject\r\n[-e] git://git.example.com/MyProject.git@refs/pull/123/head#egg=MyProject\r\n```\r\nAccording to https://pip.pypa.io/en/stable/reference/pip_install/#git ", "@LysandreJik Maybe we can list both options i.e. also reference `pip install git+https://github.com/huggingface/transformers`\r\n\r\nAlso small nitpick, don't hesitate to squash commits when merging very related changes.", "Fair nitpick" ]
1,574
1,574
1,574
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1889/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1889/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1889", "html_url": "https://github.com/huggingface/transformers/pull/1889", "diff_url": "https://github.com/huggingface/transformers/pull/1889.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1889.patch", "merged_at": 1574365280000 }
https://api.github.com/repos/huggingface/transformers/issues/1888
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1888/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1888/comments
https://api.github.com/repos/huggingface/transformers/issues/1888/events
https://github.com/huggingface/transformers/issues/1888
525,939,353
MDU6SXNzdWU1MjU5MzkzNTM=
1,888
Is there a straightforward way to classify documents at the sentence level, while using surrounding sentences for context?
{ "login": "pydn", "id": 25550995, "node_id": "MDQ6VXNlcjI1NTUwOTk1", "avatar_url": "https://avatars.githubusercontent.com/u/25550995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pydn", "html_url": "https://github.com/pydn", "followers_url": "https://api.github.com/users/pydn/followers", "following_url": "https://api.github.com/users/pydn/following{/other_user}", "gists_url": "https://api.github.com/users/pydn/gists{/gist_id}", "starred_url": "https://api.github.com/users/pydn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pydn/subscriptions", "organizations_url": "https://api.github.com/users/pydn/orgs", "repos_url": "https://api.github.com/users/pydn/repos", "events_url": "https://api.github.com/users/pydn/events{/privacy}", "received_events_url": "https://api.github.com/users/pydn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe you should start with [XLNet](https://arxiv.org/abs/1906.08237).", "@iedmrc Thanks, I'm sure that most of these models could do this. I'm looking for guidance on adjusting the architecture of these transformer models to look for context form surrounding sentences to classify the current sentence. XLNetForSequenceClassification takes in a document as a whole for classification.\r\n\r\nI imagine many others have confronted this problem, so I'm reaching out for guidance on where to start.", "@pydn You are probably looking for a model like this: https://github.com/allenai/sequential_sentence_classification", "@armancohan Thank you! This is exactly what I was looking for." ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help I'm curious how one would go about classifying documents at the sentence level using the Sequence Classification classes where the classification of a document's sentence would use other sentences in the document for context. For example, if a document says, "I truly hate this product. Thanks [company name]!" The two sentences should be classified as negative. However, if you classify each sentence separately, "I truly hate this product" would be classified as negative, while "Thanks [company name]!" would likely be classified as positive. I understand I could classify the document as a whole, but I'm looking for a more granular level of text classification. Any guidance on making adjustments for this need would be greatly appreciated. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1888/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1887
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1887/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1887/comments
https://api.github.com/repos/huggingface/transformers/issues/1887/events
https://github.com/huggingface/transformers/issues/1887
525,927,375
MDU6SXNzdWU1MjU5MjczNzU=
1,887
Using GPU for gpt2-xl
{ "login": "samer-noureddine", "id": 32775563, "node_id": "MDQ6VXNlcjMyNzc1NTYz", "avatar_url": "https://avatars.githubusercontent.com/u/32775563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samer-noureddine", "html_url": "https://github.com/samer-noureddine", "followers_url": "https://api.github.com/users/samer-noureddine/followers", "following_url": "https://api.github.com/users/samer-noureddine/following{/other_user}", "gists_url": "https://api.github.com/users/samer-noureddine/gists{/gist_id}", "starred_url": "https://api.github.com/users/samer-noureddine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samer-noureddine/subscriptions", "organizations_url": "https://api.github.com/users/samer-noureddine/orgs", "repos_url": "https://api.github.com/users/samer-noureddine/repos", "events_url": "https://api.github.com/users/samer-noureddine/events{/privacy}", "received_events_url": "https://api.github.com/users/samer-noureddine/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We have not released a new Pypi/conda version yet, but will do so in the next few days. You can install from source if you want to use GPT2-xl before that!", "Via docker:\r\nhttps://github.com/huggingface/transformers/blob/master/docker/Dockerfile\r\n\r\nVia pip:\r\nhttps://github.com/huggingface/transformers/blob/master/docker/Dockerfile\r\nor \r\nhttps://github.com/huggingface/transformers/issues/1837#issuecomment-554594306" ]
1,574
1,575
1,575
NONE
null
I want to use gpt2-xl using my PC's GPU (NVIDIA GeForce GTX 1070). The usual way to do this is via conda, but it appears that the latest version of gpt-2 is not available for conda. For example, the following installation commands don't work: ``` conda install transformers conda install git+https://github.com/huggingface/transformers ``` The latest version of transformers for conda is a[ month old,](https://anaconda.org/conda-forge/transformers) and therefore doesn't include gpt2-xl. How can I use my machine's GPU to run gpt2-xl without using conda? Will there be conda support soon?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1887/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1886
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1886/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1886/comments
https://api.github.com/repos/huggingface/transformers/issues/1886/events
https://github.com/huggingface/transformers/issues/1886
525,870,307
MDU6SXNzdWU1MjU4NzAzMDc=
1,886
save_pretrained on CamembertTokenizer
{ "login": "YohannLeFaou", "id": 13356017, "node_id": "MDQ6VXNlcjEzMzU2MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/13356017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YohannLeFaou", "html_url": "https://github.com/YohannLeFaou", "followers_url": "https://api.github.com/users/YohannLeFaou/followers", "following_url": "https://api.github.com/users/YohannLeFaou/following{/other_user}", "gists_url": "https://api.github.com/users/YohannLeFaou/gists{/gist_id}", "starred_url": "https://api.github.com/users/YohannLeFaou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YohannLeFaou/subscriptions", "organizations_url": "https://api.github.com/users/YohannLeFaou/orgs", "repos_url": "https://api.github.com/users/YohannLeFaou/repos", "events_url": "https://api.github.com/users/YohannLeFaou/events{/privacy}", "received_events_url": "https://api.github.com/users/YohannLeFaou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This should be fixed in #1860 which should be merged shortly" ]
1,574
1,575
1,575
NONE
null
## 🐛 Bug This is probably something you already know, but `save_pretrained` for `CamembertTokenizer` seems not working at the moment. Here is the error message I get: ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-38-52393f32f8a0> in <module> 1 results = {} 2 if args.do_eval and args.local_rank in [-1, 0]: ----> 3 tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case) 4 checkpoints = [args.output_dir] 5 if args.eval_all_checkpoints: /mnt/azmnt/code/Users/adm/transformers/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs) 281 282 """ --> 283 return cls._from_pretrained(*inputs, **kwargs) 284 285 /mnt/azmnt/code/Users/adm/transformers/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 410 411 # Instantiate tokenizer. --> 412 tokenizer = cls(*init_inputs, **init_kwargs) 413 414 # Save inputs and kwargs for saving and re-loading with ``save_pretrained`` /mnt/azmnt/code/Users/adm/transformers/transformers/tokenization_camembert.py in __init__(self, vocab_file, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, additional_special_tokens, **kwargs) 55 self.max_len_sentences_pair = self.max_len - 4 # take into account special tokens 56 self.sp_model = spm.SentencePieceProcessor() ---> 57 self.sp_model.Load(str(vocab_file)) 58 # HACK: These tokens were added by fairseq but don't seem to be actually used when duplicated in the actual 59 # sentencepiece vocabulary (this is the case for <s> and </s> /anaconda/envs/azureml_py36/lib/python3.6/site-packages/sentencepiece.py in Load(self, filename) 116 117 def Load(self, filename): --> 118 return _sentencepiece.SentencePieceProcessor_Load(self, filename) 119 120 def LoadOrDie(self, filename): OSError: Not found: "None": No such file or directory Error #2 ``` Best
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1886/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1885
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1885/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1885/comments
https://api.github.com/repos/huggingface/transformers/issues/1885/events
https://github.com/huggingface/transformers/issues/1885
525,853,112
MDU6SXNzdWU1MjU4NTMxMTI=
1,885
GPT2 Tokenizer Special Token ID Bug
{ "login": "sshearing", "id": 19912805, "node_id": "MDQ6VXNlcjE5OTEyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/19912805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshearing", "html_url": "https://github.com/sshearing", "followers_url": "https://api.github.com/users/sshearing/followers", "following_url": "https://api.github.com/users/sshearing/following{/other_user}", "gists_url": "https://api.github.com/users/sshearing/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshearing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshearing/subscriptions", "organizations_url": "https://api.github.com/users/sshearing/orgs", "repos_url": "https://api.github.com/users/sshearing/repos", "events_url": "https://api.github.com/users/sshearing/events{/privacy}", "received_events_url": "https://api.github.com/users/sshearing/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, there have been a few fixes done on the tokenizers recently that haven't been released on Pypi yet. When I run your script with the library installed from source, I obtain: \r\n\r\n```py\r\n[SEP] 50258\r\n[PAD] 50257\r\nAll special tokens: ['[PAD]', '[SEP]', '<|endoftext|>']\r\nAll special ids: [50257, 50258, 50256]\r\n```\r\n\r\nWould you mind installing from source using `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes your problem,?", "That fixed it, thank you!" ]
1,574
1,574
1,574
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): GPT2 Tokenizer The problem arise when using: my own modified script: The problem arises when I try to add special tokens to the GPT2 tokenizer, specifically a pad token and a sep token. The tasks I am working on is: * [ ] my own task or dataset: summarization on the xsum dataset, however, the current bug does not actually affect the model, but the pre-processing. ## To Reproduce <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> from transformers import AutoTokenizer encoder = AutoTokenizer.from_pretrained('gpt2') encoder.add_special_tokens({'pad_token': '[PAD]', 'sep_token': '[SEP]'}) print(encoder.sep_token, encoder.sep_token_id) print(encoder.pad_token, encoder.pad_token_id) print('All special tokens:', encoder.all_special_tokens) print('All special ids:', encoder.all_special_ids) ## Expected behavior The expected output of this script is below: [SEP] 50258 [PAD] 50257 All special tokens: ['<|endoftext|>', '[PAD]', '[SEP]'] All special ids: [50256, 50257, 50258] The actual output is: [SEP] 50258 [PAD] 50257 All special tokens: ['<|endoftext|>', '[PAD]', '[SEP]'] All special ids: [50256, 50256, 50256] As you can see, All special IDS do not match the actual ids of the [SEP] and [PAD] token. Not sure why this is the case. Am I misunderstanding something about how add_special_tokens works? ## Environment * OS: Ubuntu 18.04 * Python version: 3.7 * PyTorch version: 1.0.0 * PyTorch Transformers version (or branch): 2.1.0 * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: None ## Additional context <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1885/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1885/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1884
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1884/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1884/comments
https://api.github.com/repos/huggingface/transformers/issues/1884/events
https://github.com/huggingface/transformers/issues/1884
525,835,396
MDU6SXNzdWU1MjU4MzUzOTY=
1,884
Wrong definition of the `logging_steps` parameter at the `run_lm_finetuning.py`
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
CONTRIBUTOR
null
Hi, `logging_steps` parameter defined as `"Log every X updates steps."` but it's been affected by `gradient_accumulation_steps` as follows: https://github.com/huggingface/transformers/blob/f3386d938348628c91457fc7d8650c223317a053/examples/run_lm_finetuning.py#L243-L254 Therefore, if you, e.g., set logging_steps=1000 and gradient_accumulation_steps=5, it'll log in every 5000 steps. That affects `evaluate_during_training` in a not intended way.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1884/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1883
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1883/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1883/comments
https://api.github.com/repos/huggingface/transformers/issues/1883/events
https://github.com/huggingface/transformers/issues/1883
525,760,902
MDU6SXNzdWU1MjU3NjA5MDI=
1,883
F score 0 in combining RoBERTa and BiLSTM
{ "login": "EngSalem", "id": 13298690, "node_id": "MDQ6VXNlcjEzMjk4Njkw", "avatar_url": "https://avatars.githubusercontent.com/u/13298690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EngSalem", "html_url": "https://github.com/EngSalem", "followers_url": "https://api.github.com/users/EngSalem/followers", "following_url": "https://api.github.com/users/EngSalem/following{/other_user}", "gists_url": "https://api.github.com/users/EngSalem/gists{/gist_id}", "starred_url": "https://api.github.com/users/EngSalem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EngSalem/subscriptions", "organizations_url": "https://api.github.com/users/EngSalem/orgs", "repos_url": "https://api.github.com/users/EngSalem/repos", "events_url": "https://api.github.com/users/EngSalem/events{/privacy}", "received_events_url": "https://api.github.com/users/EngSalem/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I got the same situation. However, when I changed the learning rate to 1e-5, the problem was solved.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,581
1,581
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am trying to stack an LSTM on top of RoBERTa model for binary classification problem I tried two moods , freezing BERT embedding and Fine-tuning In case of freezing the embedding I get around 57% F-score compared to regular fine tune BERT which got me 81% When I tried to unfreeze the embedding the f score is always 0 , Most probably I am doing something wrong, but I can't spot it I would appreciate some help Model part ``` class RoBERTaLSTMClassifier(nn.Module): def __init__(self, bert_config, num_classes, hidden_size=None, dropout=0.5): """ bert: pretrained bert model num_classes: the number of num_classes hidden_size: the number of hiddens which will be used by LSTM layer dropout: dropout rate """ super(RoBERTaLSTMClassifier, self).__init__() self.num_classes = num_classes self.model = RobertaModel(bert_config) if hidden_size is None: self.hidden_size = bert_config.hidden_size else: self.hidden_size = hidden_size self.lstm = nn.LSTM(bert_config.hidden_size, self.hidden_size, bidirectional=True,batch_first=True) self.dropout = nn.Dropout(dropout) self.classifier = nn.Linear(self.hidden_size * 2, 1) self.softmax = nn.Softmax() ## add sigmoid non linearity for binary classification self.sig = nn.Sigmoid() def forward(self, input_ids, attention_mask, current_batch_size, hidden): """ all_layers: whether or not to return all encoded_layers return: logits in the following format (batch_size, num_classes) """ #with torch.no_grad(): ## freeze embedding from BERT outputs = self.model(input_ids=input_ids, attention_mask=attention_mask) # last hidden state is input to the LSTM output, (hidden_h, hidden_c) = self.lstm(outputs[0], hidden) output_hidden = torch.cat((hidden_h[0], hidden_h[1]), dim=1) #[B, H*2] logits = self.classifier(self.dropout(output_hidden)) #[B, C] sig_out = self.sig(logits).view(current_batch_size, -1) ## get the last batch output sig_out = sig_out[:, -1] # get last batch of labels hidden = (hidden_h, hidden_c) return sig_out, hidden def init_bilstm_hidden(self, batch_size): h0 = torch.zeros(2, batch_size, self.hidden_size).to(device) # 2 for bidirection c0 = torch.zeros(2, batch_size, self.hidden_size).to(device) return (h0, c0) ``` The training loop part ``` from sklearn.metrics import f1_score from tqdm import tqdm, trange import numpy as np lr=0.001 roberta_conf = RobertaConfig.from_pretrained('roberta-base') num_classes = 2 hidden_size = 256 LSTMRoBERTaModel = RoBERTaLSTMClassifier(roberta_conf, num_classes=num_classes,hidden_size= hidden_size,dropout=0.5) criterion = nn.BCELoss() ## binary cross entropy optimizer = torch.optim.Adam(LSTMRoBERTaModel.parameters(), lr=lr) epochs = 5 counter = 0 max_grad_norm = 1.0 nb_tr_examples, nb_tr_steps = 0, 0 for _ in trange(epochs, desc="Epoch"): LSTMRoBERTaModel.cuda() LSTMRoBERTaModel.train() tr_loss = 0 y_preds = [] y_true = [] hidden_init = LSTMRoBERTaModel.init_bilstm_hidden(batch_size=bs) h = hidden_init for step, batch in enumerate(train_dataloader): batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch current_batch_size = b_input_ids.size()[0] ## ## need to ask why converting to tuple h = tuple([each.data for each in h]) ## forward pass preds, h = LSTMRoBERTaModel.forward(b_input_ids, b_input_mask, current_batch_size,h) loss = criterion(preds.squeeze(),b_labels.float()) # track train loss tr_loss += loss.item() nb_tr_examples += b_input_ids.size(0) nb_tr_steps += 1 # gradient clipping torch.nn.utils.clip_grad_norm_(parameters=LSTMRoBERTaModel.parameters(), max_norm=max_grad_norm) loss.backward() optimizer.step() LSTMRoBERTaModel.zero_grad() # print train loss per epoch print("\nTrain loss: {}".format(tr_loss/nb_tr_steps)) LSTMRoBERTaModel.eval() eval_loss, eval_accuracy = 0, 0 nb_eval_steps, nb_eval_examples = 0, 0 val_h = LSTMRoBERTaModel.init_bilstm_hidden(bs) for batch in dev_dataloader: batch = tuple(t.to(device) for t in batch) b_input_ids, b_input_mask, b_labels = batch current_batch_size = b_input_ids.size()[0] with torch.no_grad(): preds, val_h = LSTMRoBERTaModel.forward(b_input_ids, b_input_mask, current_batch_size, val_h) loss = criterion(preds.squeeze(),b_labels.float()) eval_loss += loss y_preds.extend(np.round(preds.data.cpu())) y_true.extend(b_labels.data.cpu()) #print(preds[2], b_labels[2] ) #eval_accuracy += f1_score(torch.tensor.numpy(b_labels.float), toch.tensor.numpy(preds)) nb_eval_examples += b_input_ids.size(0) nb_eval_steps += 1 eval_loss = eval_loss/nb_eval_steps print("Validation loss: {}".format(eval_loss)) print("F1 - Score: {}".format(f1_score(y_true,y_preds))) #print("F1- Score: {}".format(eval_accuracy/nb_eval_steps)) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1883/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1882
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1882/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1882/comments
https://api.github.com/repos/huggingface/transformers/issues/1882/events
https://github.com/huggingface/transformers/issues/1882
525,709,764
MDU6SXNzdWU1MjU3MDk3NjQ=
1,882
XLNetForSequenceClassification and CLS token
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ok, I found why : the padding is done before the sentence, and not after. " ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, As we can see in the official XLNet code ([https://github.com/zihangdai/xlnet]()), the [SEP] and [CLS] tokens are added before the zero-padding. (in convert_single_sequence() in classifier_utils.py) (example : "This is my sequence ! [SEP][CLS]") What is strange is that in the XLNetModelForSequenceClassification, we use a SequenceSummary which will keep the last value from the hidden state, i.e. the last token from padding (which doesn't correspond to the real [CLS] token)? (As I can see, it's working like that in the official code too). Maybe someone here can explain me if this is an error, or if I forgot something ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1882/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1882/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1881
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1881/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1881/comments
https://api.github.com/repos/huggingface/transformers/issues/1881/events
https://github.com/huggingface/transformers/pull/1881
525,617,419
MDExOlB1bGxSZXF1ZXN0MzQzMjA0NjU4
1,881
convert list to set in tokenize().split_on_tokens()
{ "login": "578123043", "id": 16147509, "node_id": "MDQ6VXNlcjE2MTQ3NTA5", "avatar_url": "https://avatars.githubusercontent.com/u/16147509?v=4", "gravatar_id": "", "url": "https://api.github.com/users/578123043", "html_url": "https://github.com/578123043", "followers_url": "https://api.github.com/users/578123043/followers", "following_url": "https://api.github.com/users/578123043/following{/other_user}", "gists_url": "https://api.github.com/users/578123043/gists{/gist_id}", "starred_url": "https://api.github.com/users/578123043/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/578123043/subscriptions", "organizations_url": "https://api.github.com/users/578123043/orgs", "repos_url": "https://api.github.com/users/578123043/repos", "events_url": "https://api.github.com/users/578123043/events{/privacy}", "received_events_url": "https://api.github.com/users/578123043/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=h1) Report\n> Merging [#1881](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f3386d938348628c91457fc7d8650c223317a053?src=pr&el=desc) will **increase** coverage by `1.35%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1881/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1881 +/- ##\n==========================================\n+ Coverage 82.72% 84.08% +1.35% \n==========================================\n Files 97 97 \n Lines 14316 14316 \n==========================================\n+ Hits 11843 12037 +194 \n+ Misses 2473 2279 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.14% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (+2.21%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (+2.43%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1881/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=footer). Last update [f3386d9...821ba9e](https://codecov.io/gh/huggingface/transformers/pull/1881?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi, thanks for looking into it! What's your use-case for adding 207 special tokens?", "In the kaggle Tensorflow 2 Nature question competition. I try to add some additional Sequence Embedding Such as [Tableid=13] and split short sentence. ", "I may misunderstand, but why not use the `add_tokens` method rather than the `add_special_tokens` method, which is reserved for tokens like CLS or MASK?", "Yes, `add_special_tokens` method is reserved for a limited number of tokens with special properties and usage like CLS or MASK. For other uses, go for `add_tokens`.\r\n", "Here is how we solved the performance issue when adding custom vocabulary: In the `add_tokens` method, we simply integrate `new_tokens` into the `self.vocab`.\r\n\r\n```\r\nfrom transformers import BertTokenizer, WordpieceTokenizer\r\nfrom collections import OrderedDict\r\n\r\n\r\nclass CustomVocabBertTokenizer(BertTokenizer):\r\n def add_tokens(self, new_tokens):\r\n new_tokens = [token for token in tokens if not (token in self.vocab or token in self.all_special_tokens)]\r\n\r\n self.vocab = OrderedDict([\r\n *self.vocab.items(),\r\n *[\r\n (token, i + len(self.vocab))\r\n for i, token in enumerate(new_tokens)\r\n ]\r\n ])\r\n\r\n self.ids_to_tokens = OrderedDict([(ids, tok) for tok, ids in self.vocab.items()])\r\n self.wordpiece_tokenizer = WordpieceTokenizer(vocab=self.vocab, unk_token=self.unk_token)\r\n\r\n return len(new_tokens)\r\n```" ]
1,574
1,582
1,575
NONE
null
As [issue 1830](https://github.com/huggingface/transformers/issues/1830), I meet the same question when i add some special_tokens in Tokenizer. But I think it is property **self.all_special_tokens** that slow the code. property self.all_special_tokens will **be called so many time** when we added some special token. An easy way to solve this problem is to create a temporary Set. In my implementation, it faster about 10 times when 207 special tokens are added, I do not get a precise number because of multiprocessing : )
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1881/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1881/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1881", "html_url": "https://github.com/huggingface/transformers/pull/1881", "diff_url": "https://github.com/huggingface/transformers/pull/1881.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1881.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1880
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1880/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1880/comments
https://api.github.com/repos/huggingface/transformers/issues/1880/events
https://github.com/huggingface/transformers/issues/1880
525,381,645
MDU6SXNzdWU1MjUzODE2NDU=
1,880
question about 'add_prefix_space' of encode method
{ "login": "weiguowilliam", "id": 31396452, "node_id": "MDQ6VXNlcjMxMzk2NDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiguowilliam", "html_url": "https://github.com/weiguowilliam", "followers_url": "https://api.github.com/users/weiguowilliam/followers", "following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}", "gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions", "organizations_url": "https://api.github.com/users/weiguowilliam/orgs", "repos_url": "https://api.github.com/users/weiguowilliam/repos", "events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}", "received_events_url": "https://api.github.com/users/weiguowilliam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Actually, both 616 and 1820 are indices for `my`. The difference lies in the prefix space that was added: in the case that it was not added (1820), it is identified as being the beginning of the sentence or part of a word.\r\n\r\nIn the case that it was added (616), it is identified as being the beginning of a word in a sentence.\r\n\r\nYou can check the behavior by calling `tokenize` instead of `encode`, for example:\r\n\r\n```py\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\nprint(tokenizer.tokenize(\"my son is jeremy\",add_prefix_space=True))\r\n# ['Ġmy', 'Ġson', 'Ġis', 'Ġj', 'ere', 'my']\r\n\r\nprint(tokenizer.tokenize(\"my son is my son\",add_prefix_space=False))\r\n# ['my', 'Ġson', 'Ġis', 'Ġmy', 'Ġson']\r\n```", "Thank you! ", "still don't understand what is the difference in \r\n```\r\ntokenizer = tokenizers.ByteLevelBPETokenizer(\r\n vocab_file=PATH+'vocab-roberta-base.json', \r\n merges_file=PATH+'merges-roberta-base.txt',\r\n lowercase=True,\r\n add_prefix_space=True\r\n)\r\n```", "The argument you mention is for the initialization of a tokenizer from the `huggingface/tokenizers` library, you would have better luck opening an issue there." ]
1,574
1,590
1,574
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, I have a question about the 'add_prefix_space' parameter of encode method. I'll use small gpt2 for example. > tokenizer = GPT2Tokenizer.from_pretrained('gpt2') > print(tokenizer.encode("my",add_prefix_space=True)) #616 > print(tokenizer.encode("my",add_prefix_space=False)) #1820 I think 616 is the index for the word 'my'(' my') since there's a space before letter 'm'. And 1820 is the index for '-me' which is part of a word. Am I right? Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1880/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1879
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1879/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1879/comments
https://api.github.com/repos/huggingface/transformers/issues/1879/events
https://github.com/huggingface/transformers/issues/1879
525,368,726
MDU6SXNzdWU1MjUzNjg3MjY=
1,879
run_squad hangs for small max_seq_length
{ "login": "immawatson", "id": 57159300, "node_id": "MDQ6VXNlcjU3MTU5MzAw", "avatar_url": "https://avatars.githubusercontent.com/u/57159300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/immawatson", "html_url": "https://github.com/immawatson", "followers_url": "https://api.github.com/users/immawatson/followers", "following_url": "https://api.github.com/users/immawatson/following{/other_user}", "gists_url": "https://api.github.com/users/immawatson/gists{/gist_id}", "starred_url": "https://api.github.com/users/immawatson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/immawatson/subscriptions", "organizations_url": "https://api.github.com/users/immawatson/orgs", "repos_url": "https://api.github.com/users/immawatson/repos", "events_url": "https://api.github.com/users/immawatson/events{/privacy}", "received_events_url": "https://api.github.com/users/immawatson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great, thanks for letting us know!" ]
1,574
1,574
1,574
NONE
null
## 🐛 Bug I am experimenting with minimal examples to understand how sequences are generated, setting `max_seq_length` very low to trigger document chunking, and I hit this bug: When `max_seq_length` is smaller than the query, `max_tokens_for_doc` becomes negative causing an infinite loop. https://github.com/huggingface/transformers/blob/f3386d938348628c91457fc7d8650c223317a053/examples/utils_squad.py#L241-L258 Obviously having `max_seq_length` smaller than the query isn't very useful, and I should be setting `max_query_length`, But it would still be nice to have an assertion catch this. ### Steps to reproduce 1. `rm $SQUAD_DIR/cached*` 2. train with `--max_seq_length 16` ``` python examples/run_squad.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --do_lower_case \ --do_train \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 16 \ --doc_stride 8 \ --output_dir /tmp/squad_seq_len_bug ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1879/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1878
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1878/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1878/comments
https://api.github.com/repos/huggingface/transformers/issues/1878/events
https://github.com/huggingface/transformers/issues/1878
525,346,481
MDU6SXNzdWU1MjUzNDY0ODE=
1,878
Is this a bug?
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "duplicated \r\nhttps://github.com/huggingface/transformers/issues/1837#issue-523145270", "I checked the other comment but did not work. Any idea ? \r\n```\r\n__init__() got an unexpected keyword argument 'num_warmup_steps'\r\n```\r\n", "Hi, could you specify which versions of Python and Transformers is your environment running on?", "Thanks\r\n```\r\npython: 3.6.8\r\nTransformers: 2.1.1\r\n```\r\n", "Did you install transformers from source or from pypi?", "Could you try installing from source and telling me if it fixes your problem?", "Thanks it worked", "worked with building from the source and this \r\n```\r\nfrom transformers import AdamW,get_linear_schedule_with_warmup\r\n```", "Great to hear!", "whats the meaning of installing from source ,i am a little bit confused. could you please tell me ,thank you.\r\ni've tried 'pip install git+https://github.com/huggingface/transformers' or pip install transformers \r\nor pip install pytorhc-transformers. But they all doesn't work.", "> Hello,\r\n> I tried to import this:\r\n> \r\n> `from transformers import AdamW, get_linear_schedule_with_warmup`\r\n> but got error : model not found\r\n> but when i did this, it worked:\r\n> \r\n> ```\r\n> from transformers import AdamW\r\n> from transformers import WarmupLinearSchedule as get_linear_schedule_with_warmup\r\n> ```\r\n> \r\n> however when I set the scheduler like this :\r\n> \r\n> ```\r\n> scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total) \r\n> ```\r\n> \r\n> I got this error :\r\n> \r\n> ```\r\n> __init__() got an unexpected keyword argument 'num_warmup_steps'\r\n> ```\r\n\r\ncan you kindly tell me how to install from source.", "@TLCFYBJJHYYSND See https://github.com/huggingface/transformers#from-source", "In order to install Transformers library, you have to open your command line and enter:\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\n> > Hello,\r\n> > I tried to import this:\r\n> > `from transformers import AdamW, get_linear_schedule_with_warmup`\r\n> > but got error : model not found\r\n> > but when i did this, it worked:\r\n> > ```\r\n> > from transformers import AdamW\r\n> > from transformers import WarmupLinearSchedule as get_linear_schedule_with_warmup\r\n> > ```\r\n> > \r\n> > \r\n> > however when I set the scheduler like this :\r\n> > ```\r\n> > scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total) \r\n> > ```\r\n> > \r\n> > \r\n> > I got this error :\r\n> > ```\r\n> > __init__() got an unexpected keyword argument 'num_warmup_steps'\r\n> > ```\r\n> \r\n> can you kindly tell me how to install from source.", "Thanks for that, it really works!!!" ]
1,574
1,576
1,574
NONE
null
Hello, I tried to import this: `from transformers import AdamW, get_linear_schedule_with_warmup` but got error : model not found but when i did this, it worked: ``` from transformers import AdamW from transformers import WarmupLinearSchedule as get_linear_schedule_with_warmup ``` however when I set the scheduler like this : ``` scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total) ``` I got this error : ``` __init__() got an unexpected keyword argument 'num_warmup_steps' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1878/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1878/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1877
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1877/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1877/comments
https://api.github.com/repos/huggingface/transformers/issues/1877/events
https://github.com/huggingface/transformers/issues/1877
525,330,957
MDU6SXNzdWU1MjUzMzA5NTc=
1,877
Can the HuggingFace GPT2DoubleHeadsModel either for regular language modelling or solving multiple-choice questions? or is it only for solving multiple-choice questions?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,574
1,574
1,574
NONE
null
Hello, According to the HuggingFace Transformer's website (https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), GPT2DoubleHeadsModel is the GPT2 Model transformer with a language modelling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. Does this mean that the GPT2DoubleHeadsModel can be used for both the regular language modelling tasks (predicting the next token) and also for solving multiple-choice questions? Or does this mean that GPT2DoubleHeadsModel can be used to test machines on the multiple-choice type questions only? Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1877/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1876
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1876/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1876/comments
https://api.github.com/repos/huggingface/transformers/issues/1876/events
https://github.com/huggingface/transformers/pull/1876
525,323,366
MDExOlB1bGxSZXF1ZXN0MzQyOTM3NzY0
1,876
Mean does not exist in TF2
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=h1) Report\n> Merging [#1876](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f3386d938348628c91457fc7d8650c223317a053?src=pr&el=desc) will **increase** coverage by `1.35%`.\n> The diff coverage is `0%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1876/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1876 +/- ##\n==========================================\n+ Coverage 82.72% 84.08% +1.35% \n==========================================\n Files 97 97 \n Lines 14316 14316 \n==========================================\n+ Hits 11843 12037 +194 \n+ Misses 2473 2279 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.68% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (+2.21%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (+2.43%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1876/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=footer). Last update [f3386d9...3de31f8](https://codecov.io/gh/huggingface/transformers/pull/1876?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks a lot @LysandreJik!" ]
1,574
1,576
1,575
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1876/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1876", "html_url": "https://github.com/huggingface/transformers/pull/1876", "diff_url": "https://github.com/huggingface/transformers/pull/1876.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1876.patch", "merged_at": 1575015993000 }
https://api.github.com/repos/huggingface/transformers/issues/1875
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1875/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1875/comments
https://api.github.com/repos/huggingface/transformers/issues/1875/events
https://github.com/huggingface/transformers/issues/1875
525,316,325
MDU6SXNzdWU1MjUzMTYzMjU=
1,875
Clarifications about the Quick-tour of the fine-tuning scripts?
{ "login": "bulutsuzku", "id": 57968219, "node_id": "MDQ6VXNlcjU3OTY4MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/57968219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bulutsuzku", "html_url": "https://github.com/bulutsuzku", "followers_url": "https://api.github.com/users/bulutsuzku/followers", "following_url": "https://api.github.com/users/bulutsuzku/following{/other_user}", "gists_url": "https://api.github.com/users/bulutsuzku/gists{/gist_id}", "starred_url": "https://api.github.com/users/bulutsuzku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bulutsuzku/subscriptions", "organizations_url": "https://api.github.com/users/bulutsuzku/orgs", "repos_url": "https://api.github.com/users/bulutsuzku/repos", "events_url": "https://api.github.com/users/bulutsuzku/events{/privacy}", "received_events_url": "https://api.github.com/users/bulutsuzku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, what exactly is your task? Is it question answering, sequence classification, language modeling, other?", "> Hi, what exactly is your task? Is it question answering, sequence classification, language modeling, other?\r\n\r\nHi LysandreJik\r\n\r\nI have two tasks: First one is sequence classification. Second one is Relation Extraction.\r\n", "For sequence classification, you can indeed take inspiration from the script `run_glue.py`. Those scripts are meant as examples showcasing how to manage models for different tasks, so that users may train our models in any way they see fit. The models are standard PyTorch models so they can be trained like any other model.\r\n\r\nFor relation extraction, there are no models specifically targeting this task but feel free to adapt a standard model by adding a few layers on top. For training there are examples in our [HMTL repository](https://github.com/huggingface/hmtl) which may be of help." ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help The [quick tour](https://github.com/huggingface/transformers#quick-tour-of-the-fine-tuningusage-scripts) mentions an example of fine-tuning on different GLUE tasks. My task/trained model has nothing to do with GLUE tasks, i.e.: I already have a pre-trained pytorch model file and the formatted corpus files (train.tsv, dev.tsv and test.tsv). Is the run_glue.py script meant to be edited by me for doing a new *NON GLUE* fine-tuning? Or it is a matter of adjusting parameters on the CLI? If none of them applies, where I can find an example of fine-tuning with support for "customized" task?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1875/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1874
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1874/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1874/comments
https://api.github.com/repos/huggingface/transformers/issues/1874/events
https://github.com/huggingface/transformers/issues/1874
525,258,147
MDU6SXNzdWU1MjUyNTgxNDc=
1,874
Disparitry with Fairseq Roberta implementation for predicting the mask token
{ "login": "hamediramin", "id": 25848270, "node_id": "MDQ6VXNlcjI1ODQ4Mjcw", "avatar_url": "https://avatars.githubusercontent.com/u/25848270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hamediramin", "html_url": "https://github.com/hamediramin", "followers_url": "https://api.github.com/users/hamediramin/followers", "following_url": "https://api.github.com/users/hamediramin/following{/other_user}", "gists_url": "https://api.github.com/users/hamediramin/gists{/gist_id}", "starred_url": "https://api.github.com/users/hamediramin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hamediramin/subscriptions", "organizations_url": "https://api.github.com/users/hamediramin/orgs", "repos_url": "https://api.github.com/users/hamediramin/repos", "events_url": "https://api.github.com/users/hamediramin/events{/privacy}", "received_events_url": "https://api.github.com/users/hamediramin/received_events", "type": "User", "site_admin": false }
[ { "id": 1834056761, "node_id": "MDU6TGFiZWwxODM0MDU2NzYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling", "name": "Core: Modeling", "color": "FF8446", "default": false, "description": "Internals of the library; Models." } ]
closed
false
null
[]
[ "We would need more information", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Also cc @joeddav ", "Ok, I've reproduced an issue here.\r\n\r\n```python\r\nsentence = ' My favorite type of cheese is <mask>!'\r\nroberta_fairseq = torch.hub.load('pytorch/fairseq', 'roberta.base')\r\nroberta_hf = pipeline('fill-mask', model='roberta-base')\r\n\r\nroberta_fairseq.fill_mask(sentence)\r\n\"\"\"[(' My favorite type of cheese is goat!', 0.10264863073825836, ' goat'),\r\n (' My favorite type of cheese is cream!', 0.07233648002147675, ' cream'),\r\n (' My favorite type of cheese is broccoli!', 0.057516228407621384,' broccoli'),\r\n (' My favorite type of cheese is bacon!', 0.037444233894348145, ' bacon'),\r\n (' My favorite type of cheese is ham!', 0.03281955048441887, ' ham')]\"\"\"\r\n\r\nroberta_hf(sentence)\r\n\"\"\"[{'sequence': '<s> My favorite type of cheese is goat!</s>',\r\n 'score': 0.09398240596055984,\r\n 'token': 24791},\r\n {'sequence': '<s> My favorite type of cheese is cream!</s>',\r\n 'score': 0.07240654528141022,\r\n 'token': 6353},\r\n {'sequence': '<s> My favorite type of cheese is broccoli!</s>',\r\n 'score': 0.06303773820400238,\r\n 'token': 34803},\r\n {'sequence': '<s> My favorite type of cheese is bacon!</s>',\r\n 'score': 0.04124978929758072,\r\n 'token': 18599},\r\n {'sequence': '<s> My favorite type of cheese is jack!</s>',\r\n 'score': 0.03125162795186043,\r\n 'token': 10267}]\"\"\"\r\n```\r\n\r\nThey don't align when I use roberta large either. cc @sshleifer.", "Probably want to compare the encoded input IDs. If they are the same, then there is a model difference (which is a big deal). Otherwise it can be attributed to a different tokenisation (e.g. difference in spaces or difference in adding special tokens).", "The input IDs do match (again, when an empty space is prepended in fairseq's case): `[ 0, 1308, 2674, 1907, 9, 7134, 16, 50264, 328, 2]`. \r\n\r\nI doesn't look like a pipelines issue:\r\n\r\n```python\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')\r\nroberta_hf = RobertaForMaskedLM.from_pretrained('roberta-base')\r\nroberta_fairseq = torch.hub.load('pytorch/fairseq', 'roberta.base')\r\nroberta_fairseq.model.eval()\r\n\r\nsequence = 'My favorite type of cheese is gouda!'\r\ntokens = torch.tensor([tokenizer.encode(sequence)])\r\nhf_out = roberta_hf.forward(tokens)[0]\r\nfairseq_out = roberta_fairseq.model.forward(tokens)[0]\r\n\r\ntorch.mean(torch.abs(fairseq_out - hf_out)).item()\r\n# 0.053489409387111664\r\n```\r\n\r\nMasked LM outputs differ by an average of `0.053`.", "I don't have any time to look into this further, but from looking at the named parameters of both implementations, it seems that there is a difference in the LM head, that is, everything that comes after the encode (embedding + 12 layers). I can't pinpoint it exactly, but it seems note-worthy that the weight LM head in fairseq, comes from the token embedding weights\r\n\r\nhttps://github.com/pytorch/fairseq/blob/4923f34790761f41170fd88cd06e4d00ab0c527c/fairseq/models/roberta/model.py#L286-L291\r\n\r\nI am not sure in how far that is the same in transformers, but it seemed peculiar. Particularly, it seems that `transformers` does the 'None' case here, whereas it should actually take the token embedding weights.\r\n\r\nhttps://github.com/pytorch/fairseq/blob/4923f34790761f41170fd88cd06e4d00ab0c527c/fairseq/models/roberta/model.py#L211-L213\r\n\r\nI might be completely wrong, though. Other input welcome.\r\n\r\n**Tl;dr**\r\n\r\nTransformers does this\r\n\r\n```python\r\nself.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)\r\n```\r\n\r\nwhereas fairseq does\r\n\r\n```python\r\nself.lm_head = RobertaLMHead(\r\n embed_dim=args.encoder_embed_dim,\r\n output_dim=len(dictionary),\r\n activation_fn=args.activation_fn,\r\n weight=self.sentence_encoder.embed_tokens.weight,\r\n)\r\n...\r\n# `weight` is **not** None\r\nif weight is None:\r\n weight = nn.Linear(embed_dim, output_dim, bias=False).weight\r\nself.weight = weight\r\n```\r\n\r\nI'd be very interested to know _why_ fairseq does this. Why would you want specifically those weights to be the same? ", "I implemented my findings in https://github.com/huggingface/transformers/pull/2928.\r\n\r\nThe following snippet (based on the problem stated above) runs as expected, with identical results for both `transformers` and `fairseq`.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import RobertaTokenizer, RobertaForMaskedLM, pipeline\r\n\r\n# init HuggingFace\r\ntokenizer_hf = RobertaTokenizer.from_pretrained('roberta-base')\r\nroberta_hf = RobertaForMaskedLM.from_pretrained('roberta-base')\r\n# init fairseq\r\nroberta_fairseq = torch.hub.load('pytorch/fairseq', 'roberta.base')\r\nroberta_fairseq.model.eval()\r\n\r\n# LM test\r\nsequence = 'My favorite type of cheese is gouda!'\r\ntokens = torch.tensor([tokenizer_hf.encode(sequence)])\r\nhf_out = roberta_hf.forward(tokens)[0]\r\nfairseq_out = roberta_fairseq.model.forward(tokens)[0]\r\nprint(torch.mean(torch.abs(fairseq_out - hf_out)).item())\r\n# should be 0.0\r\n\r\n# ---\r\n# pipeline test fill-mask\r\nsentence = ' My favorite type of cheese is <mask>!'\r\nprint(roberta_fairseq.fill_mask(sentence))\r\nroberta_hf = pipeline('fill-mask', tokenizer=tokenizer_hf, model='roberta-base')\r\nprint(roberta_hf(sentence))\r\n# should have identical predictions\r\n```", "Resolved by #2958" ]
1,574
1,582
1,582
NONE
null
## ❓ Questions & Help When experimenting with the fill mask functionality on the fairseq repo I realized there is a disparity with the results I get from huggingface implementation. Wondering if there is a mismatch between the model released here and their latest release. Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1874/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1873
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1873/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1873/comments
https://api.github.com/repos/huggingface/transformers/issues/1873/events
https://github.com/huggingface/transformers/pull/1873
525,203,965
MDExOlB1bGxSZXF1ZXN0MzQyODM5Nzk3
1,873
German DistilBERT
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=h1) Report\n> Merging [#1873](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f3386d938348628c91457fc7d8650c223317a053?src=pr&el=desc) will **increase** coverage by `1.35%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1873/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1873 +/- ##\n==========================================\n+ Coverage 82.72% 84.08% +1.35% \n==========================================\n Files 97 97 \n Lines 14316 14316 \n==========================================\n+ Hits 11843 12037 +194 \n+ Misses 2473 2279 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0LnB5) | `100% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fZGlzdGlsYmVydC5weQ==) | `89.74% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (+2.21%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (+2.43%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1873/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=footer). Last update [f3386d9...da06afa](https://codecov.io/gh/huggingface/transformers/pull/1873?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "fyi I just adjusted the permissions on the s3 objects.", "Perfect!" ]
1,574
1,575
1,575
COLLABORATOR
null
Hi, this PR adds the German DistilBERT to the library 🤗 Thanks to the Hugging Face team (incl. hardware support) the German DistilBERT was trained on 1/2 of the data that was used for training the [German DBMDZ BERT](https://github.com/dbmdz/german-bert) model for ~4 days. Evaluation on NER tasks (German CoNLL and GermEval) shows a performance difference of 1.3% on average compared to the German BERT model. --- Remaining tasks: * [x] Model, configuration and vocab is already uploaded to S3, only file permissions need to be adjusted
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1873/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1873/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1873", "html_url": "https://github.com/huggingface/transformers/pull/1873", "diff_url": "https://github.com/huggingface/transformers/pull/1873.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1873.patch", "merged_at": 1575015947000 }
https://api.github.com/repos/huggingface/transformers/issues/1871
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1871/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1871/comments
https://api.github.com/repos/huggingface/transformers/issues/1871/events
https://github.com/huggingface/transformers/issues/1871
525,132,664
MDU6SXNzdWU1MjUxMzI2NjQ=
1,871
Understanding feature creation
{ "login": "AVSuni", "id": 29711150, "node_id": "MDQ6VXNlcjI5NzExMTUw", "avatar_url": "https://avatars.githubusercontent.com/u/29711150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AVSuni", "html_url": "https://github.com/AVSuni", "followers_url": "https://api.github.com/users/AVSuni/followers", "following_url": "https://api.github.com/users/AVSuni/following{/other_user}", "gists_url": "https://api.github.com/users/AVSuni/gists{/gist_id}", "starred_url": "https://api.github.com/users/AVSuni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AVSuni/subscriptions", "organizations_url": "https://api.github.com/users/AVSuni/orgs", "repos_url": "https://api.github.com/users/AVSuni/repos", "events_url": "https://api.github.com/users/AVSuni/events{/privacy}", "received_events_url": "https://api.github.com/users/AVSuni/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Just to clarify, I am asking this because I originally tried to create a custom vocab/merges files by giving a new path for tokenizer.from_pretrained for my new files. Failure to do so led me to notice that giving a new path for the original files downloaded from S3 also fail in the new path. If I put my custom files to the path used for the S3 cache (with identical filenames used by the cache, everything works fine. In other words, if I replace the contents of the following files with my custom content, everything works:\r\n\r\n`11/19/2019 18:31:15 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json from cache at /root/.cache/torch/transformers/d0c5776499adc1ded22493fae699da0971c1ee4c2587111707a4d177d20257a2.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b\r\n11/19/2019 18:31:15 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-merges.txt from cache at /root/.cache/torch/transformers/b35e7cd126cd4229a746b5d5c29a749e8e84438b14bcdb575950584fe33207e8.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda` ", "Have you learned anything about this? I'm currently running to the same issue trying to specify an existing model. I noticed that the cache_lm_* file that it creates is only 5 bytes, and gedit says the files is corrupt.", "Yes, so bizarrely even if you have the original hugggingface files in a new location, an empty data frame (the cache_lm file) will be created. Try replacing the original hugging face files downloaded from S3 with your custom vocab/merges files. They should be in /root/.cache/torch/transformers\r\n\r\nIf you still run into the same problem, then either your data is too small or the vocab/merges files are not in the correct format.", "@AVSuni I get the same issue when using a checkpoint generated by huggingface/transformers. Even when using cached versions of the merge/vocab files from `/root/.cache/torch/transformers/`. Even when I delete the `cache_lm_*` file, specifying the model rather than giving a model name to be used causes this issue.", "Maybe I'm being superstitious, but setting a block size seems to alleviate the issue. I didn't bother tracing the code, but I did notice that when I set `block_size`, the cache file is set with the number of the block size. When I do not set the block size, the temp file has some absurdly long number in the cache file where the block size would be. I don't really see that documented anywhere, but here we are.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,581
1,581
NONE
null
## ❓ Questions & Help Hi, I am trying to understand feature creation when using run_lm_finetuning.py. If I let the script download the tokenizer files, everything works fine. If I manually assign a folder for the tokenizer with vocab.json and merges.txt downloaded from Hugging Face, I get an empty dataframe during feature creation. `11/19/2019 16:36:20 - INFO - __main__ - Creating features from dataset file at /data/test 11/19/2019 16:41:53 - INFO - __main__ - Saving features into cached file /data/test/roberta-base_cached_lm_999999999998_moleculenet_roberta_train.csv Traceback (most recent call last): File "run_lm_finetuning.py", line 558, in <module> main() File "run_lm_finetuning.py", line 510, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 175, in train train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset) File "/usr/local/conda3/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 94, in __init__ "value, but got num_samples={}".format(self.num_samples)) ValueError: num_samples should be a positive integer value, but got num_samples=0`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1871/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1870
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1870/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1870/comments
https://api.github.com/repos/huggingface/transformers/issues/1870/events
https://github.com/huggingface/transformers/pull/1870
524,961,807
MDExOlB1bGxSZXF1ZXN0MzQyNjQwMjA2
1,870
XLNet for Token classification
{ "login": "alexzubiaga", "id": 17120045, "node_id": "MDQ6VXNlcjE3MTIwMDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17120045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexzubiaga", "html_url": "https://github.com/alexzubiaga", "followers_url": "https://api.github.com/users/alexzubiaga/followers", "following_url": "https://api.github.com/users/alexzubiaga/following{/other_user}", "gists_url": "https://api.github.com/users/alexzubiaga/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexzubiaga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexzubiaga/subscriptions", "organizations_url": "https://api.github.com/users/alexzubiaga/orgs", "repos_url": "https://api.github.com/users/alexzubiaga/repos", "events_url": "https://api.github.com/users/alexzubiaga/events{/privacy}", "received_events_url": "https://api.github.com/users/alexzubiaga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=h1) Report\n> Merging [#1870](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f3386d938348628c91457fc7d8650c223317a053?src=pr&el=desc) will **increase** coverage by `1.37%`.\n> The diff coverage is `88.4%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1870/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1870 +/- ##\n==========================================\n+ Coverage 82.72% 84.09% +1.37% \n==========================================\n Files 97 97 \n Lines 14316 14383 +67 \n==========================================\n+ Hits 11843 12096 +253 \n+ Misses 2473 2287 -186\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/modeling\\_tf\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3hsbmV0X3Rlc3QucHk=) | `96.05% <100%> (+0.3%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `88.53% <100%> (+0.34%)` | :arrow_up: |\n| [transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3hsbmV0X3Rlc3QucHk=) | `94.28% <81.81%> (-1.85%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.95% <82.6%> (+2.77%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `82% <0%> (+1.33%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (+2.21%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (+12.35%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (+15.53%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1870/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=footer). Last update [f3386d9...4193aa9](https://codecov.io/gh/huggingface/transformers/pull/1870?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This looks good to me, thank you @alexzubiaga !", "This is great, thanks a lot @alexzubiaga (and nice work adding the tests!).\r\nMerging" ]
1,574
1,575
1,575
NONE
null
Hi, this PR adds a XLNet based token classifier for PyTorch `XLNetForTokenClassification` and TensorFlow `TFXLNetForTokenClassification` with unit test that allows performing sequence labeling tasks like NER or PoS tagging.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1870/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1870", "html_url": "https://github.com/huggingface/transformers/pull/1870", "diff_url": "https://github.com/huggingface/transformers/pull/1870.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1870.patch", "merged_at": 1575536049000 }
https://api.github.com/repos/huggingface/transformers/issues/1869
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1869/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1869/comments
https://api.github.com/repos/huggingface/transformers/issues/1869/events
https://github.com/huggingface/transformers/issues/1869
524,921,099
MDU6SXNzdWU1MjQ5MjEwOTk=
1,869
Pre-training a smaller version of BERT on own data
{ "login": "paul-you", "id": 23263212, "node_id": "MDQ6VXNlcjIzMjYzMjEy", "avatar_url": "https://avatars.githubusercontent.com/u/23263212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/paul-you", "html_url": "https://github.com/paul-you", "followers_url": "https://api.github.com/users/paul-you/followers", "following_url": "https://api.github.com/users/paul-you/following{/other_user}", "gists_url": "https://api.github.com/users/paul-you/gists{/gist_id}", "starred_url": "https://api.github.com/users/paul-you/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/paul-you/subscriptions", "organizations_url": "https://api.github.com/users/paul-you/orgs", "repos_url": "https://api.github.com/users/paul-you/repos", "events_url": "https://api.github.com/users/paul-you/events{/privacy}", "received_events_url": "https://api.github.com/users/paul-you/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, you might be interested in freezing some layers rather than initializing a model with only 3 layers. Freezing some layers mean that they won't be affected by the backpropagation.\r\n\r\nIf that's what you're looking for, @BramVanroy's answer may be of help: https://github.com/huggingface/transformers/issues/1431\r\n\r\nWe do not have an NSP example in our examples." ]
1,574
1,575
1,575
NONE
null
## ❓ Questions & Help Hello, <!-- A clear and concise description of the question. --> I want to load the first 3-4 layers of BERT, pre-train those on my own data and then fine-tune the model on the target task. Is this possible? I took a look at the [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) and changed the lines 478-479: ` config = config_class.from_pretrained(args.config_name if args.config_name else args.model_name_or_path, cache_dir=args.cache_dir if args.cache_dir else None)` to ` config = BertConfig(num_hidden_layers=3) ` I'm not sure, if this change is sufficient and if the first or last three layers will be taken from the pre-trained BERT? Is the _next sentence prediction_ pre-training task implemented somewhere ? Thanks in Advance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1869/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1868
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1868/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1868/comments
https://api.github.com/repos/huggingface/transformers/issues/1868/events
https://github.com/huggingface/transformers/pull/1868
524,901,118
MDExOlB1bGxSZXF1ZXN0MzQyNTg5ODk1
1,868
XLNet for Token classification
{ "login": "alexzubiaga", "id": 17120045, "node_id": "MDQ6VXNlcjE3MTIwMDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/17120045?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexzubiaga", "html_url": "https://github.com/alexzubiaga", "followers_url": "https://api.github.com/users/alexzubiaga/followers", "following_url": "https://api.github.com/users/alexzubiaga/following{/other_user}", "gists_url": "https://api.github.com/users/alexzubiaga/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexzubiaga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexzubiaga/subscriptions", "organizations_url": "https://api.github.com/users/alexzubiaga/orgs", "repos_url": "https://api.github.com/users/alexzubiaga/repos", "events_url": "https://api.github.com/users/alexzubiaga/events{/privacy}", "received_events_url": "https://api.github.com/users/alexzubiaga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,574
1,574
1,574
NONE
null
Hi, this PR adds a `TFXLNetForTokenClassification` implementation and unit test that allows to perform sequence labeling tasks like NER or PoS tagging.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1868/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1868", "html_url": "https://github.com/huggingface/transformers/pull/1868", "diff_url": "https://github.com/huggingface/transformers/pull/1868.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1868.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1867
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1867/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1867/comments
https://api.github.com/repos/huggingface/transformers/issues/1867/events
https://github.com/huggingface/transformers/issues/1867
524,881,347
MDU6SXNzdWU1MjQ4ODEzNDc=
1,867
How to fine-tune BERT on a large training dataset?
{ "login": "wcgan", "id": 43312978, "node_id": "MDQ6VXNlcjQzMzEyOTc4", "avatar_url": "https://avatars.githubusercontent.com/u/43312978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wcgan", "html_url": "https://github.com/wcgan", "followers_url": "https://api.github.com/users/wcgan/followers", "following_url": "https://api.github.com/users/wcgan/following{/other_user}", "gists_url": "https://api.github.com/users/wcgan/gists{/gist_id}", "starred_url": "https://api.github.com/users/wcgan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wcgan/subscriptions", "organizations_url": "https://api.github.com/users/wcgan/orgs", "repos_url": "https://api.github.com/users/wcgan/repos", "events_url": "https://api.github.com/users/wcgan/events{/privacy}", "received_events_url": "https://api.github.com/users/wcgan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think you should check out pytorch dataloaders https://pytorch.org/docs/stable/data.html\r\nAlso, gradient accumulation is helpful.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,579
1,579
NONE
null
## ❓ Questions & Help Hi, I want to fine-tune BERT on a large training dataset. With around 1.5million training examples, this is currently consuming around 60GB of RAM. Is there any way to reduce the RAM usage or load the training examples in parts? Thanks!.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1867/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1866
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1866/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1866/comments
https://api.github.com/repos/huggingface/transformers/issues/1866/events
https://github.com/huggingface/transformers/issues/1866
524,878,431
MDU6SXNzdWU1MjQ4Nzg0MzE=
1,866
BertForTokenClassification for NER . what is the conclusion of this output ?
{ "login": "AjitAntony", "id": 46282348, "node_id": "MDQ6VXNlcjQ2MjgyMzQ4", "avatar_url": "https://avatars.githubusercontent.com/u/46282348?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AjitAntony", "html_url": "https://github.com/AjitAntony", "followers_url": "https://api.github.com/users/AjitAntony/followers", "following_url": "https://api.github.com/users/AjitAntony/following{/other_user}", "gists_url": "https://api.github.com/users/AjitAntony/gists{/gist_id}", "starred_url": "https://api.github.com/users/AjitAntony/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjitAntony/subscriptions", "organizations_url": "https://api.github.com/users/AjitAntony/orgs", "repos_url": "https://api.github.com/users/AjitAntony/repos", "events_url": "https://api.github.com/users/AjitAntony/events{/privacy}", "received_events_url": "https://api.github.com/users/AjitAntony/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Why not just use flair? Flair has integrated BERT.", "BERT was performing sentence embedding better than Flair(tired all different type of stacked embedding ) but less when compared to USC . Flair had a functionality that gave NER tagging directly . i was expecting the same will would be available in BERT but its not available directly .It would be good if BERT gives a direct plug and play functionality for NER task . ", "@ajbot2019 \r\n\r\n1. compute prediction list\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_ner.py#L242\r\n\r\n2. print it with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_ner.py#L503\r\n\r\nshell scripts to train and evaluate for CoNLL 2003 (english) dataset.\r\nhttps://github.com/dsindex/transformers_examples\r\nthis may help you.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi , Im trying to perform NER using BertForTokenClassification .I saw this sample code in transformers GIT page. from transformers import BertForTokenClassification tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForTokenClassification.from_pretrained('bert-base-uncased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1 labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1 print(labels) outputs = model(input_ids, labels=labels) loss, scores = outputs[:2] output loss: tensor(0.5975, grad_fn=<NllLossBackward>) output scores: tensor([[[-0.1622, 0.1824], [-0.1552, -0.0534], [-0.3032, -0.1166], [-0.2453, -0.1182], [-0.4388, -0.1898], [-0.3159, -0.1067]]], grad_fn=<AddBackward0>) 1.When i printed the loss and score i got below values .Now how should i infer this output ? what dose these value represent for performing NER ? what should i do to get the NER tags for the sentence "Hello, my dog is cute" . 2.i referred few NER codes in GIT using BERT and they have humongous line of code written for performing the NER . Is there any simple way to perform NER using bert ? like how Flair library has very simple method for performing the NER task ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1866/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1866/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1865
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1865/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1865/comments
https://api.github.com/repos/huggingface/transformers/issues/1865/events
https://github.com/huggingface/transformers/issues/1865
524,824,317
MDU6SXNzdWU1MjQ4MjQzMTc=
1,865
Run bert for multi-classification but loss never decrease
{ "login": "MrKZZ", "id": 18312628, "node_id": "MDQ6VXNlcjE4MzEyNjI4", "avatar_url": "https://avatars.githubusercontent.com/u/18312628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrKZZ", "html_url": "https://github.com/MrKZZ", "followers_url": "https://api.github.com/users/MrKZZ/followers", "following_url": "https://api.github.com/users/MrKZZ/following{/other_user}", "gists_url": "https://api.github.com/users/MrKZZ/gists{/gist_id}", "starred_url": "https://api.github.com/users/MrKZZ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MrKZZ/subscriptions", "organizations_url": "https://api.github.com/users/MrKZZ/orgs", "repos_url": "https://api.github.com/users/MrKZZ/repos", "events_url": "https://api.github.com/users/MrKZZ/events{/privacy}", "received_events_url": "https://api.github.com/users/MrKZZ/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Your copy&pasta is broken. Perhaps Roberto can help you? [Text classification with RoBERTa](https://rsilveira79.github.io/fermenting_gradients/machine_learning/nlp/pytorch/text_classification_roberta/)", "> Your copy&pasta is broken. Perhaps Roberto can help you? [Text classification with RoBERTa](https://rsilveira79.github.io/fermenting_gradients/machine_learning/nlp/pytorch/text_classification_roberta/)\r\n\r\nhahaha, actually i solve this problem last night. It's because WarmupLinearSchedule, and I set a t_total args which is smaller than warmup steps , so the lr is nearly zero actually.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,580
1,580
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I copy the code in example/run_glue.py for text classification task with bert model. All i change in this code is changed to a multi-classification task. I didn't change the main stream of this code. however, after 80 epoches running, i just found the eval result didn't change from the begging to the end. Next time i print the loss and i found the loss nearly unchanged from the begining to the end. i change loss from 1e-5 to 1e-3 and it is not the case, the loss is still nearly unchanged. i just sum the loss together in an single epoch, and after several epochs it only changed from 4703 to 4700 . So that's my problem and any ideas will be appreciated . ` import torch from transformers import BertForSequenceClassification, BertTokenizer, InputExample, AdamW, WarmupLinearSchedule from torch.utils.data import DataLoader, Dataset, SequentialSampler, RandomSampler, TensorDataset from torch.utils.data.distributed import DistributedSampler import random import numpy as np import os, pickle import argparse from tqdm import tqdm, trange import copy, json import glob from apex import amp from transformers import glue_convert_examples_to_features as convert_examples_to_features from Bert_eval import evaluate def train(args, model, tokenizer): with open("../../data/train_texts", "r") as fr: texts = fr.readlines() with open("../../data/train_labels", "r") as fr: labels = fr.readlines() examples = load_dataset(texts, labels) label_list = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] features = convert_examples_to_features(examples, tokenizer, label_list=label_list, output_mode="classification") cached_path = "cached_file" torch.save(features, cached_path) all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long) all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) all_labels = torch.tensor([f.label for f in features], dtype=torch.long) train_dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels) train_dataloader = DataLoader(train_dataset, batch_size=args.train_batch_size) no_decay = ['bias', 'LayerNorm.weight'] optimizer_grouped_parameters = [ {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': args.weight_decay}, {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon) scheduler = WarmupLinearSchedule(optimizer, warmup_steps=args.warmup_steps, t_total=args.t_total) model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level) # multi-gpu training (should be after apex fp16 initialization) if args.n_gpu > 1: model = torch.nn.DataParallel(model) # Distributed training (should be after apex fp16 initialization) if args.local_rank != -1: model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True) # Train! print("***** Running training *****") print(" Num examples = %d", len(train_dataset)) print(" Num Epochs = %d", args.num_train_epochs) print(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps) print(" Total optimization steps = %d", args.t_total) global_step = 0 train_iterator = trange(int(args.num_train_epochs), desc="Epoch")#, disable=args.local_rank not in [-1, 0]) set_seed(args) for _ in train_iterator: tr_loss = 0.0 print("global_step: ", global_step) epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0]) print("optimizer: ", optimizer) for step, batch in enumerate(epoch_iterator): # print("batch: ", len(batch)) model.train() batch = tuple(t.to(args.device) for t in batch) inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'token_type_ids': batch[2], 'labels': batch[3]} outputs = model(**inputs) loss = outputs[0] # model outputs are always tuple in transformers (see doc) print("loss:", loss) loss.backward() tr_loss += loss.item() if (step + 1) % args.gradient_accumulation_steps == 0: if args.fp16: torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm) else: torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm) optimizer.step() scheduler.step() # Update learning rate schedule model.zero_grad() global_step += 1 output_dir = os.path.join(args.output_dir, 'checkpoint-{}'.format(global_step)) if not os.path.exists(output_dir): os.makedirs(output_dir) result = evaluate(args, model, tokenizer) print("result: ", result) model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training model_to_save.save_pretrained(output_dir) torch.save(args, os.path.join(output_dir, 'training_args.bin')) def load_dataset(lines, labels): """ convert examples for the training sets for document classification """ examples = [] for (i, (line, label)) in enumerate(zip(lines, labels)): line = line.strip() label = label.strip() # label = str(i % 2) guid = i examples.append( InputExample(guid=guid, text_a=line, label=label) ) return examples if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--data_dir", type=str, default="./data/", help="data path for train or test") parser.add_argument("--train_batch_size", type=int, default=32, help="training batch size") parser.add_argument("--t_total", type=int, default=100, help="training epoch") parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.") parser.add_argument("--output_mode", default="classification", type=str, help="task name.") parser.add_argument("--learning_rate", default=1e-3, type=float, help="The initial learning rate for Adam.") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--num_train_epochs", default=100, type=float, help="Total number of training epochs to perform.") parser.add_argument("--max_steps", default=-1, type=int, help="If > 0: set total number of training steps to perform. Override num_train_epochs.") parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.") parser.add_argument("--n_gpu", type=int, default=1, help="number of gpus to run") parser.add_argument('--logging_steps', type=int, default=50, help="Log every X updates steps.") parser.add_argument('--save_steps', type=int, default=5000, help="Save checkpoint every X updates steps.") parser.add_argument("--per_gpu_eval_batch_size", default=256, type=int, help="task name.") parser.add_argument("--do_train", action="store_true", help="train model flag") parser.add_argument("--do_eval", action="store_true", help="eval model flag") parser.add_argument("--eval_all_checkpoints", action='store_true', help="Evaluate all checkpoints starting with the same prefix as model_name ending and ending with step number") parser.add_argument("--no_cuda", action='store_true', help="Avoid using CUDA when available") parser.add_argument('--overwrite_output_dir', action='store_true', help="Overwrite the content of the output directory") parser.add_argument('--overwrite_cache', action='store_true', help="Overwrite the cached training and evaluation sets") parser.add_argument('--seed', type=int, default=42, help="random seed for initialization") parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.") parser.add_argument('--fp16', action='store_true', help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit") parser.add_argument('--fp16_opt_level', type=str, default='O1', help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." "See details at https://nvidia.github.io/apex/amp.html") parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument('--gradient_accumulation_steps', type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.") parser.add_argument("--output_dir", default="./checkpoint", type=str, help="The output directory where the model predictions and checkpoints will be written.") parser.add_argument("--eval_checkpoint", type=str, default="1730", help="the checkpoint to reload") # parser.add_argument("--gpu", type=int, default=0, help="choose gpu device") args = parser.parse_args() if args.local_rank == -1 or args.no_cuda: device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") args.n_gpu = torch.cuda.device_count() else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs torch.cuda.set_device(args.local_rank) device = torch.device("cuda", args.local_rank) torch.distributed.init_process_group(backend='nccl') args.n_gpu = 1 args.device = device model_class = BertForSequenceClassification tokenizer_class = BertTokenizer pretrained_weights = "bert-base-chinese" tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained(pretrained_weights, num_labels=10)#, output_hidden_states=True, output_attentions=True) if torch.cuda.is_available(): model.to(args.device) train(args, model, tokenizer) if True: # Create output directory if needed if not os.path.exists(args.output_dir) and args.local_rank in [-1, 0]: os.makedirs(args.output_dir) print("model saved") model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training model_to_save.save_pretrained(args.output_dir) tokenizer.save_pretrained(args.output_dir) print("model saved into ", args.output_dir) # Good practice: save your training arguments together with the trained model torch.save(args, os.path.join(args.output_dir, 'training_args.bin')) # Load a trained model and vocabulary that you have fine-tuned model = model_class.from_pretrained(args.output_dir) tokenizer = tokenizer_class.from_pretrained(args.output_dir) model.to(args.device) results = {} if True: tokenizer = tokenizer_class.from_pretrained(args.output_dir)#, do_lower_case=args.do_lower_case) checkpoints = [args.output_dir] if args.eval_all_checkpoints: checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + '/**/' + WEIGHTS_NAME, recursive=True))) for checkpoint in checkpoints: global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else "" prefix = checkpoint.split('/')[-1] if checkpoint.find('checkpoint') != -1 else "" model = model_class.from_pretrained(checkpoint) model.to(args.device) result = evaluate(args, model, tokenizer, prefix=prefix) result = dict((k + '_{}'.format(global_step), v) for k, v in result.items()) results.update(result) print("result: ", results) `
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1865/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1864
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1864/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1864/comments
https://api.github.com/repos/huggingface/transformers/issues/1864/events
https://github.com/huggingface/transformers/issues/1864
524,771,067
MDU6SXNzdWU1MjQ3NzEwNjc=
1,864
tensorflow2.0 does not has mean, but reduce mean
{ "login": "andompesta", "id": 6725612, "node_id": "MDQ6VXNlcjY3MjU2MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6725612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andompesta", "html_url": "https://github.com/andompesta", "followers_url": "https://api.github.com/users/andompesta/followers", "following_url": "https://api.github.com/users/andompesta/following{/other_user}", "gists_url": "https://api.github.com/users/andompesta/gists{/gist_id}", "starred_url": "https://api.github.com/users/andompesta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andompesta/subscriptions", "organizations_url": "https://api.github.com/users/andompesta/orgs", "repos_url": "https://api.github.com/users/andompesta/repos", "events_url": "https://api.github.com/users/andompesta/events{/privacy}", "received_events_url": "https://api.github.com/users/andompesta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This bug has been reviewed and corrected in this PR #1876!\r\nClose this issue." ]
1,574
1,575
1,575
CONTRIBUTOR
null
## 🐛 Bug tensorflow2.0 does not has a mean function but only a reduce_mean(). ``` output = tf.mean(hidden_states, axis=1) ``` should be: ``` output = tf.reduce_mean(hidden_states, axis=1) ``` <!-- Important information --> Model I am using: TFXLMForSequenceClassification with summary_type == 'mean' Language I am using the model on (English, Chinese....): any The problem arise when using: when loading the model, due to the lazy initialisation of tf2.0 you have to run the model with dummy input. ``` TFXLMForSequenceClassification.from_pretrained(os.path.join(args.model_path, "xlm-mlm-17-1280-tf_model.h5"), config=config) ``` The tasks I am working on is: * [ x] my own task or dataset: I want to finetune the model on a custom english dataset and transfer it to multiple language ## To Reproduce Steps to reproduce the behavior: 1. load config file and add summary info ``` config = XLMConfig.from_pretrained(os.path.join(args.data_path, "xlm-mlm-17-1280-config.json")) config.summary_use_proj = True config.summary_type = 'mean' config.summary_proj_to_labels = True config.num_labels = len(LABELS) config.summary_activation = "tanh" ``` 2. load the model ``` model = TFXLMForSequenceClassification.from_pretrained(os.path.join(args.model_path, "xlm-mlm-17-1280-tf_model.h5"), config=config) ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior You should see an error stating that tf2.0 does not has a mean function. File "transformers/modeling_tf_utils.py", line 442: ``output = tf.mean(hidden_states, axis=1)`` should be ``output = tf.reduce_mean(hidden_states, axis=1)`` ```AttributeError: module 'tensorflow' has no attribute 'mean'``` ## Environment * OS: Ubuntu16.04 * Python version: 3.7.5 * Tensorflow version: 2.0 * Transformers version (or branch): 2.1.1 * Using GPU: yes * Distributed of parallel setup : not distributed, only single GPU ## Additional context <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1864/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1863
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1863/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1863/comments
https://api.github.com/repos/huggingface/transformers/issues/1863/events
https://github.com/huggingface/transformers/issues/1863
524,737,968
MDU6SXNzdWU1MjQ3Mzc5Njg=
1,863
How do I train OpenAIGPTDoubleHeadsModel from scratch?
{ "login": "g-karthik", "id": 3851993, "node_id": "MDQ6VXNlcjM4NTE5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/g-karthik", "html_url": "https://github.com/g-karthik", "followers_url": "https://api.github.com/users/g-karthik/followers", "following_url": "https://api.github.com/users/g-karthik/following{/other_user}", "gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}", "starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions", "organizations_url": "https://api.github.com/users/g-karthik/orgs", "repos_url": "https://api.github.com/users/g-karthik/repos", "events_url": "https://api.github.com/users/g-karthik/events{/privacy}", "received_events_url": "https://api.github.com/users/g-karthik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can initialize a model by calling its constructor with a configuration:\r\n\r\n```py\r\nfrom transformers import OpenAIGPTDoubleHeadsModel, OpenAIGPTConfig\r\n\r\nconfig = OpenAIGPTConfig()\r\nmodel = OpenAIGPTDoubleHeadsModel(config)\r\n```", "Cool, so if I instantiate the model with the default config, it will use the same vocab as the pre-trained model but it won't initialize the pre-trained weights, is that correct?\r\n\r\nAlso, could you please point me to where the model weights are initialized when we don't use pre-trained weights?", "If you instantiate a model with a default configuration without specifying any argument, it will instantiate your model according to the default configuration values, [visible here](https://github.com/huggingface/transformers/blob/master/transformers/configuration_openai.py#L59).\r\n\r\nThe model weights are initialized in the `OpenAIGPTPreTrainedModel`'s [`_init_weights` method](https://github.com/huggingface/transformers/blob/master/transformers/modeling_openai.py#L267)." ]
1,574
1,575
1,575
NONE
null
## ❓ Questions & Help It looks like I cannot train `OpenAIGPTDoubleHeadsModel` from scratch because it necessarily needs to be initialized using the `from_pretrained()` method. How should I initialize this model to be able to train it from scratch? As a hack, perhaps I could initialize it using `from_pretrained()` and then reset the pre-trained initialization so any fine-tuning would essentially be the same as pre-training?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1863/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1862
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1862/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1862/comments
https://api.github.com/repos/huggingface/transformers/issues/1862/events
https://github.com/huggingface/transformers/issues/1862
524,692,708
MDU6SXNzdWU1MjQ2OTI3MDg=
1,862
save all 12 layers outputs for each token
{ "login": "vr25", "id": 22553367, "node_id": "MDQ6VXNlcjIyNTUzMzY3", "avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vr25", "html_url": "https://github.com/vr25", "followers_url": "https://api.github.com/users/vr25/followers", "following_url": "https://api.github.com/users/vr25/following{/other_user}", "gists_url": "https://api.github.com/users/vr25/gists{/gist_id}", "starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vr25/subscriptions", "organizations_url": "https://api.github.com/users/vr25/orgs", "repos_url": "https://api.github.com/users/vr25/repos", "events_url": "https://api.github.com/users/vr25/events{/privacy}", "received_events_url": "https://api.github.com/users/vr25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, to retrieve all layers you can set the `output_hidden_states` flag to `True` in your model configuration:\r\n\r\n```py\r\nfrom transformers import BertConfig, BertModel\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-uncased\")\r\nconfig.output_hidden_states = True\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\", config=config)\r\n\r\noutputs = model(inputs)\r\nhidden_states = outputs[-1]\r\n```\r\n\r\nThe model will now output an additional value which will be a tuple of size 1 + n_layer:\r\n`hidden_states[0]`: output of the embedding layer\r\n`hidden_states[1]`: output of the first layer\r\n...\r\n`hidden_states[12]`: output of the last layer\r\n" ]
1,574
1,575
1,575
NONE
null
Hi, I am following [this](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/) post to create sentence vectors from the last 4 layers or sum all 12 layers and so on. Considering a given document has 10,000 tokens and I am planning to use "bert_base_uncased", I was wondering how to save all 12 layers for each token efficiently. Also, further, how to retrieve these layers in order to build sentence vectors. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1862/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1861
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1861/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1861/comments
https://api.github.com/repos/huggingface/transformers/issues/1861/events
https://github.com/huggingface/transformers/pull/1861
524,468,313
MDExOlB1bGxSZXF1ZXN0MzQyMjM3NTYz
1,861
Better TensorFlow 2 examples
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=h1) Report\n> Merging [#1861](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a5a06a851e1da79138e53978aa079a093f243dde?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1861/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1861 +/- ##\n=======================================\n Coverage 81.43% 81.43% \n=======================================\n Files 122 122 \n Lines 18338 18338 \n=======================================\n Hits 14933 14933 \n Misses 3405 3405\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=footer). Last update [a5a06a8...0f730de](https://codecov.io/gh/huggingface/transformers/pull/1861?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,651
1,586
MEMBER
null
This PR aims to improve the TensorFlow examples of the library. It aims to use many tf-specific paradigms, including but not limited to Keras API, AMP, XLA, `tensorflow_datasets`, `tf.train.Features` and `tf.train.Examples`, model saving using Keras callbacks. Two examples are specifically targeted: - [x] `run_tf_glue.py` - [ ] `run_tf_squad.py` Aims to have a similar architecture to the PyTorch examples: using a parser with similar arguments so as to be used from a terminal. Feedback is most welcome.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1861/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1861", "html_url": "https://github.com/huggingface/transformers/pull/1861", "diff_url": "https://github.com/huggingface/transformers/pull/1861.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1861.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1860
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1860/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1860/comments
https://api.github.com/repos/huggingface/transformers/issues/1860/events
https://github.com/huggingface/transformers/pull/1860
524,362,290
MDExOlB1bGxSZXF1ZXN0MzQyMTUwNTcz
1,860
[WIP] Add support for CamembertForTokenClassification
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=h1) Report\n> Merging [#1860](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3916b334a86484af8442d1cfdb2f15695feae581?src=pr&el=desc) will **decrease** coverage by `0.03%`.\n> The diff coverage is `55.55%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1860/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1860 +/- ##\n==========================================\n- Coverage 84.08% 84.04% -0.04% \n==========================================\n Files 97 97 \n Lines 14316 14333 +17 \n==========================================\n+ Hits 12037 12046 +9 \n- Misses 2279 2287 +8\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/1860/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2NhbWVtYmVydC5weQ==) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/1860/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jYW1lbWJlcnQucHk=) | `36.92% <38.46%> (+0.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=footer). Last update [3916b33...56c8486](https://codecov.io/gh/huggingface/transformers/pull/1860?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This looks good to me @stefan-it 👌 \r\n\r\nWe'll also need to do the TF models but that can be in a different PR.\r\n\r\ncc @LysandreJik @thomwolf ", "I tried to replicate the results for PoS tagging on ParTUT dataset (using the tagged 2.2 version from [here](https://github.com/UniversalDependencies/UD_French-ParTUT)). I could replicate the results mentioned in the CamemBERT paper for the multilingual (cased) BERT model: `run_ner` outputs F1-score, so I added the `accuracy_score` metric from the `seqeval` module for debugging ;) \r\n\r\nHowever, the CamemBERT model is ~10% behind it. I'm trying to figure out why, loss is really high compared to mBERT...\r\n", "LGTM thanks!" ]
1,574
1,574
1,574
COLLABORATOR
null
Hi, this PR adds a `CamembertForTokenClassification` implementation, so that fine-tuning of NER models is possible. Tasks: * [x] Implement `CamembertForTokenClassification` * [x] Add `CamembertForTokenClassification` to module import list * [x] Add support for CamemBERT model in `run_ner.py` example
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1860/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1860", "html_url": "https://github.com/huggingface/transformers/pull/1860", "diff_url": "https://github.com/huggingface/transformers/pull/1860.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1860.patch", "merged_at": 1574330168000 }
https://api.github.com/repos/huggingface/transformers/issues/1859
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1859/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1859/comments
https://api.github.com/repos/huggingface/transformers/issues/1859/events
https://github.com/huggingface/transformers/pull/1859
524,228,881
MDExOlB1bGxSZXF1ZXN0MzQyMDQwNzE4
1,859
Adds CamemBERT to Model architectures list
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "organizations_url": "https://api.github.com/users/Paethon/orgs", "repos_url": "https://api.github.com/users/Paethon/repos", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "received_events_url": "https://api.github.com/users/Paethon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=h1) Report\n> Merging [#1859](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0477b307c7501ea76e01b03cb387a2312db752b3?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1859/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1859 +/- ##\n=======================================\n Coverage 84.08% 84.08% \n=======================================\n Files 97 97 \n Lines 14316 14316 \n=======================================\n Hits 12037 12037 \n Misses 2279 2279\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=footer). Last update [0477b30...63af013](https://codecov.io/gh/huggingface/transformers/pull/1859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks! Slightly reworded in next commit" ]
1,574
1,574
1,574
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1859/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1859", "html_url": "https://github.com/huggingface/transformers/pull/1859", "diff_url": "https://github.com/huggingface/transformers/pull/1859.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1859.patch", "merged_at": 1574086995000 }
https://api.github.com/repos/huggingface/transformers/issues/1858
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1858/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1858/comments
https://api.github.com/repos/huggingface/transformers/issues/1858/events
https://github.com/huggingface/transformers/issues/1858
524,065,422
MDU6SXNzdWU1MjQwNjU0MjI=
1,858
Conversion of [Model]ForSequenceClassification logits to probabilities
{ "login": "sgummidipundi", "id": 24970664, "node_id": "MDQ6VXNlcjI0OTcwNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/24970664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgummidipundi", "html_url": "https://github.com/sgummidipundi", "followers_url": "https://api.github.com/users/sgummidipundi/followers", "following_url": "https://api.github.com/users/sgummidipundi/following{/other_user}", "gists_url": "https://api.github.com/users/sgummidipundi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgummidipundi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgummidipundi/subscriptions", "organizations_url": "https://api.github.com/users/sgummidipundi/orgs", "repos_url": "https://api.github.com/users/sgummidipundi/repos", "events_url": "https://api.github.com/users/sgummidipundi/events{/privacy}", "received_events_url": "https://api.github.com/users/sgummidipundi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Logits are the classification scores before softmax.\r\nI don't see why the results of your equation should add up to 1.\r\nYou may try to use the softmax function directly.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,574
1,579
1,579
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I noticed that whenever I would convert logits coming from the model to probabilities using the following equation: probability = e^logit/(1 + e^logit) The probabilities for my two classes do not add up to 1. Why is this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1858/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1858/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1857
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1857/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1857/comments
https://api.github.com/repos/huggingface/transformers/issues/1857/events
https://github.com/huggingface/transformers/issues/1857
524,011,348
MDU6SXNzdWU1MjQwMTEzNDg=
1,857
XLM Masked Word Prediction
{ "login": "ceatlinar", "id": 24279886, "node_id": "MDQ6VXNlcjI0Mjc5ODg2", "avatar_url": "https://avatars.githubusercontent.com/u/24279886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ceatlinar", "html_url": "https://github.com/ceatlinar", "followers_url": "https://api.github.com/users/ceatlinar/followers", "following_url": "https://api.github.com/users/ceatlinar/following{/other_user}", "gists_url": "https://api.github.com/users/ceatlinar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ceatlinar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceatlinar/subscriptions", "organizations_url": "https://api.github.com/users/ceatlinar/orgs", "repos_url": "https://api.github.com/users/ceatlinar/repos", "events_url": "https://api.github.com/users/ceatlinar/events{/privacy}", "received_events_url": "https://api.github.com/users/ceatlinar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Duplicate of #1842 " ]
1,574
1,574
1,574
NONE
null
## ❓ Questions & Help Does anyone knows how to mask a word in a sentence and then get predictions with probabilities from XLM models
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1857/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1856
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1856/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1856/comments
https://api.github.com/repos/huggingface/transformers/issues/1856/events
https://github.com/huggingface/transformers/issues/1856
523,994,469
MDU6SXNzdWU1MjM5OTQ0Njk=
1,856
multitask learning
{ "login": "antgr", "id": 2175768, "node_id": "MDQ6VXNlcjIxNzU3Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/2175768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antgr", "html_url": "https://github.com/antgr", "followers_url": "https://api.github.com/users/antgr/followers", "following_url": "https://api.github.com/users/antgr/following{/other_user}", "gists_url": "https://api.github.com/users/antgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/antgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antgr/subscriptions", "organizations_url": "https://api.github.com/users/antgr/orgs", "repos_url": "https://api.github.com/users/antgr/repos", "events_url": "https://api.github.com/users/antgr/events{/privacy}", "received_events_url": "https://api.github.com/users/antgr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "from the documentation of the class,\r\nhttps://github.com/huggingface/transformers/blob/933841d903a032d93b5100220dc72db9d1283eca/pytorch_transformers/modeling_bert.py#L1100\r\n\r\nI understand that I could use the ```scores```, as input to an additional module that I could stack on top of BertForTokenClassification, for example for a second task. Is that correct? \r\n\r\n```loss, scores = outputs[:2] ```\r\n\r\nBut what I am thinking right now, is that scores could have small dimensions, so probably I would need the weights of the last layer. How could I extract them?\r\n\r\nAlways your thoughts on that would be much appreciated!", "for every other suffered person that needs an answer on that last question above, I think that the way to extract those weights, is if you open the black box of the implementation of this class, and here it is what you want:\r\noutputs = self.bert(..)\r\nso I think that reimplementation/enhancment is needed to support my needs as was given above.\r\nAm I missing something if I reimplement this class adding more functionality? I think that no, and that it is safe. ", "Hi, there are several things you can do to obtain the last layer representation. \r\n\r\n- First of all, you can use a standard `BertModel` on which you add your own classifier for token classification (that's what's done with `BertForTokenClassification`). This will allow you to easily switch the heads for your multi-task setup.\r\n\r\n- You could also use the `BertForTokenClassification`, as you have said, and use the inner model (model.bert) to obtain the last layer.\r\n\r\n- Finally, the cleanest way would be to output hidden states directly by specifying the option in the configuration:\r\n\r\n```py\r\nfrom transformers import BertConfig, BertForTokenClassification\r\nimport torch\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\")\r\nconfig.output_hidden_states = True\r\n\r\nmodel = BertForTokenClassification.from_pretrained(\"bert-base-cased\", config=config)\r\n\r\ninputs = torch.tensor([[1, 2, 3]])\r\n\r\noutputs = model(inputs)\r\ntoken_classification_outputs, hidden_states = outputs\r\n\r\nlast_layer_hidden_states = hidden_states[-1]\r\n```\r\n\r\nThe variable `last_layer_hidden_states` is of shape `[batch_size, seq_len, hidden_size]` and is the output of the last transformer layer.\r\n\r\nI hope this clears things up.", "https://github.com/jiant-dev/jiant" ]
1,574
1,594
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, I would like to apply multitask learning, using for example two tasks, and the one (and maybe both) of the two tasks is sequential labeling like NER. To my understanding, in order to apply the library on such a task, the way to go is with BertForTokenClassification. Is that correct? But also what I think is that I do not have enough flexibility to use it/ adapt it to create a multitask model. Could you share your thoughts on that? Any help would be much appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1856/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1856/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1855
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1855/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1855/comments
https://api.github.com/repos/huggingface/transformers/issues/1855/events
https://github.com/huggingface/transformers/issues/1855
523,990,667
MDU6SXNzdWU1MjM5OTA2Njc=
1,855
Some questions about the abstractive summarization code.
{ "login": "linWujl", "id": 9161371, "node_id": "MDQ6VXNlcjkxNjEzNzE=", "avatar_url": "https://avatars.githubusercontent.com/u/9161371?v=4", "gravatar_id": "", "url": "https://api.github.com/users/linWujl", "html_url": "https://github.com/linWujl", "followers_url": "https://api.github.com/users/linWujl/followers", "following_url": "https://api.github.com/users/linWujl/following{/other_user}", "gists_url": "https://api.github.com/users/linWujl/gists{/gist_id}", "starred_url": "https://api.github.com/users/linWujl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linWujl/subscriptions", "organizations_url": "https://api.github.com/users/linWujl/orgs", "repos_url": "https://api.github.com/users/linWujl/repos", "events_url": "https://api.github.com/users/linWujl/events{/privacy}", "received_events_url": "https://api.github.com/users/linWujl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi!\r\n\r\nFirst, please check the `run_summarization.py` script in the `example-summarization` branch as it is more up-to-date. To answer your questions:\r\n\r\na) We are following this implementation: https://arxiv.org/pdf/1908.08345.pdf where they add an extra [CLS] token for each new sentence.\r\n\r\nb) I would need to triple-check the authors’ code, but I hope it does not matter.\r\n\r\nc) Good point. If you look at the way the encoder-decoder is implemented you will see that masks passed to the decoder are automatically turned into causal (“look-ahead”), so the code does have the expected behavior :)", "Thank you so much." ]
1,573
1,574
1,574
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi, thanks for sharing the code, i was confused about some code and implements when i reading it. a) In run_summarization_finetuning.py, ![image](https://user-images.githubusercontent.com/9161371/69008385-bf6ac500-0984-11ea-8f02-72806bb68675.png) should cls_token_id be replaced by sep_token_id? b) In utils_summarization.py ![image](https://user-images.githubusercontent.com/9161371/69008407-17a1c700-0985-11ea-8ff4-a16cee796213.png) This will make sequence [cls] a b [sep] c d [sep] encoded as 0 0 0 1 1 1 0, should the order be changed? c) In the decoder, the deocder_mask only mask the pad token, i think look_ahead_mask is more accurate for we can't see the future words in advance. Looking forward to reply. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1855/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1854
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1854/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1854/comments
https://api.github.com/repos/huggingface/transformers/issues/1854/events
https://github.com/huggingface/transformers/issues/1854
523,965,809
MDU6SXNzdWU1MjM5NjU4MDk=
1,854
run glue problem
{ "login": "ZTurboX", "id": 5669444, "node_id": "MDQ6VXNlcjU2Njk0NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5669444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZTurboX", "html_url": "https://github.com/ZTurboX", "followers_url": "https://api.github.com/users/ZTurboX/followers", "following_url": "https://api.github.com/users/ZTurboX/following{/other_user}", "gists_url": "https://api.github.com/users/ZTurboX/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZTurboX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZTurboX/subscriptions", "organizations_url": "https://api.github.com/users/ZTurboX/orgs", "repos_url": "https://api.github.com/users/ZTurboX/repos", "events_url": "https://api.github.com/users/ZTurboX/events{/privacy}", "received_events_url": "https://api.github.com/users/ZTurboX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you please respect the issue template so that we can help you better? It's hard to help without seeing the versions or the full error." ]
1,573
1,574
1,574
NONE
null
## ❓ Questions & Help How to fix this problem ![image](https://user-images.githubusercontent.com/5669444/69006304-2f1e8700-0968-11ea-9ebf-0fe4c4c19f83.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1854/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1853
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1853/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1853/comments
https://api.github.com/repos/huggingface/transformers/issues/1853/events
https://github.com/huggingface/transformers/pull/1853
523,957,722
MDExOlB1bGxSZXF1ZXN0MzQxODM0NDY5
1,853
typo "deay" -> "decay"
{ "login": "KazutoshiShinoda", "id": 16998772, "node_id": "MDQ6VXNlcjE2OTk4Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/16998772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KazutoshiShinoda", "html_url": "https://github.com/KazutoshiShinoda", "followers_url": "https://api.github.com/users/KazutoshiShinoda/followers", "following_url": "https://api.github.com/users/KazutoshiShinoda/following{/other_user}", "gists_url": "https://api.github.com/users/KazutoshiShinoda/gists{/gist_id}", "starred_url": "https://api.github.com/users/KazutoshiShinoda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KazutoshiShinoda/subscriptions", "organizations_url": "https://api.github.com/users/KazutoshiShinoda/orgs", "repos_url": "https://api.github.com/users/KazutoshiShinoda/repos", "events_url": "https://api.github.com/users/KazutoshiShinoda/events{/privacy}", "received_events_url": "https://api.github.com/users/KazutoshiShinoda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=h1) Report\n> Merging [#1853](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0477b307c7501ea76e01b03cb387a2312db752b3?src=pr&el=desc) will **decrease** coverage by `0.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1853/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1853 +/- ##\n=========================================\n- Coverage 84.08% 83.98% -0.1% \n=========================================\n Files 97 97 \n Lines 14316 14316 \n=========================================\n- Hits 12037 12023 -14 \n- Misses 2279 2293 +14\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/modeling\\_tf\\_auto\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2F1dG9fdGVzdC5weQ==) | `96.36% <0%> (-1.82%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdGVzdC5weQ==) | `93% <0%> (-1%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_ctrl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2N0cmxfdGVzdC5weQ==) | `93.06% <0%> (-1%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `98.16% <0%> (-0.92%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_openai\\_gpt\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX29wZW5haV9ncHRfdGVzdC5weQ==) | `93.85% <0%> (-0.88%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2dwdDJfdGVzdC5weQ==) | `93.85% <0%> (-0.88%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_xlm\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3hsbV90ZXN0LnB5) | `94.35% <0%> (-0.81%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_roberta\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3JvYmVydGFfdGVzdC5weQ==) | `74.4% <0%> (-0.8%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3hsbmV0X3Rlc3QucHk=) | `95.03% <0%> (-0.71%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2JlcnRfdGVzdC5weQ==) | `95.59% <0%> (-0.63%)` | :arrow_down: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/1853/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=footer). Last update [0477b30...de3cb08](https://codecov.io/gh/huggingface/transformers/pull/1853?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,573
1,574
1,574
CONTRIBUTOR
null
a typo has been fixed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1853/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1853", "html_url": "https://github.com/huggingface/transformers/pull/1853", "diff_url": "https://github.com/huggingface/transformers/pull/1853.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1853.patch", "merged_at": 1574095807000 }
https://api.github.com/repos/huggingface/transformers/issues/1852
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1852/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1852/comments
https://api.github.com/repos/huggingface/transformers/issues/1852/events
https://github.com/huggingface/transformers/issues/1852
523,890,600
MDU6SXNzdWU1MjM4OTA2MDA=
1,852
XLNet model params for Question answering
{ "login": "Swathygsb", "id": 23665054, "node_id": "MDQ6VXNlcjIzNjY1MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/23665054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Swathygsb", "html_url": "https://github.com/Swathygsb", "followers_url": "https://api.github.com/users/Swathygsb/followers", "following_url": "https://api.github.com/users/Swathygsb/following{/other_user}", "gists_url": "https://api.github.com/users/Swathygsb/gists{/gist_id}", "starred_url": "https://api.github.com/users/Swathygsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Swathygsb/subscriptions", "organizations_url": "https://api.github.com/users/Swathygsb/orgs", "repos_url": "https://api.github.com/users/Swathygsb/repos", "events_url": "https://api.github.com/users/Swathygsb/events{/privacy}", "received_events_url": "https://api.github.com/users/Swathygsb/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "What parameters should I exactly pass for XLNet QA model?", "More details would be nice but please check #1849 #1848 and #1805 . Also set `CUDA_LAUNCH_BLOCKING=1` environment variable to have more details about the error, from the CUDA.", "Thanks. It worked after changing the input sequence limit to 128.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,581
1,581
NONE
null
Model: XLNet for QA model(parameters ...) RuntimeError: CUDA error: device-side assert triggered
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1852/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1852/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1851
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1851/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1851/comments
https://api.github.com/repos/huggingface/transformers/issues/1851/events
https://github.com/huggingface/transformers/issues/1851
523,864,475
MDU6SXNzdWU1MjM4NjQ0NzU=
1,851
The acc of RACE is always low by roberta model
{ "login": "csliangchen", "id": 40766226, "node_id": "MDQ6VXNlcjQwNzY2MjI2", "avatar_url": "https://avatars.githubusercontent.com/u/40766226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/csliangchen", "html_url": "https://github.com/csliangchen", "followers_url": "https://api.github.com/users/csliangchen/followers", "following_url": "https://api.github.com/users/csliangchen/following{/other_user}", "gists_url": "https://api.github.com/users/csliangchen/gists{/gist_id}", "starred_url": "https://api.github.com/users/csliangchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csliangchen/subscriptions", "organizations_url": "https://api.github.com/users/csliangchen/orgs", "repos_url": "https://api.github.com/users/csliangchen/repos", "events_url": "https://api.github.com/users/csliangchen/events{/privacy}", "received_events_url": "https://api.github.com/users/csliangchen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Same.\r\nmodel =roberta-base\r\ntotal batch size=8\r\ntrain num epochs=5\r\nfp16 =False\r\nmax seq length =512\r\neval_acc = 0.29568242275424594\r\neval_loss = 1.3862943781258896\r\n@spolu ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "In my experience, it's because loss not getting down.\r\nrefer to fb repo issue https://github.com/pytorch/fairseq/issues/1114#issue-490144245\r\nIt seems that learning rate needs to be smaller and require larger batch size\r\n\r\nUse the config below:\r\n```\r\npython ./examples/run_multiple_choice.py --model_type roberta --task_name race --model_name_or_path roberta-base-openai-detector --do_eval --data_dir $RACE_DIR --learning_rate 1e-5 --num_train_epochs 10 --max_seq_length 512 --output_dir ./roberta-base-openai-race --model_name_or_path ./roberta-base-openai-race/ --per_gpu_eval_batch_size=9 --per_gpu_train_batch_size=9 --gradient_accumulation_steps 2 --save_steps 4882 --eval_all_checkpoints --fp16 --seed 77 --do_lower_case\r\n```\r\n\r\nI can get \r\n***** Eval results checkpoint-24410 is test:False ***** \r\neval_acc = 0.6758747697974218 \r\neval_loss = 0.9408586282721337 \r\n\r\nIt‘s getting close, but still faraway from current result......", "Thank you very much, I will try it.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,585
1,585
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> My scripts like follows: python ./examples/run_multiple_choice.py \ --model_type roberta \ --task_name race \ --model_name_or_path roberta-base \ --do_train \ --do_eval \ --do_lower_case \ --do_test \ --data_dir $RACE_DIR \ --learning_rate 5e-5 \ --num_train_epochs 5 \ --max_seq_length 512 \ --output_dir ./race_base_1115 \ --per_gpu_eval_batch_size=2 \ --per_gpu_train_batch_size=2 \ --gradient_accumulation_steps 1 \ --overwrite_output_dir \ --logging_steps 1000\ --save_steps 1000\ --evaluate_during_training \ but my acc result is always under 30%, both test and evaluate : eval_acc = 0.2833400891771382 eval_loss = 1.386294308898901 Maybe it be that the token length is too long? More than 3W input sentences' length is longer than 512. It would truncated context. But this problem didn't happen in fairseq(original Roberta code). i wanna know why. Ask for help. Thanks a lot.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1851/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1850
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1850/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1850/comments
https://api.github.com/repos/huggingface/transformers/issues/1850/events
https://github.com/huggingface/transformers/issues/1850
523,862,513
MDU6SXNzdWU1MjM4NjI1MTM=
1,850
Can't run convert_roberta_original_pytorch_checkpoint_to_pytorch.py
{ "login": "csliangchen", "id": 40766226, "node_id": "MDQ6VXNlcjQwNzY2MjI2", "avatar_url": "https://avatars.githubusercontent.com/u/40766226?v=4", "gravatar_id": "", "url": "https://api.github.com/users/csliangchen", "html_url": "https://github.com/csliangchen", "followers_url": "https://api.github.com/users/csliangchen/followers", "following_url": "https://api.github.com/users/csliangchen/following{/other_user}", "gists_url": "https://api.github.com/users/csliangchen/gists{/gist_id}", "starred_url": "https://api.github.com/users/csliangchen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csliangchen/subscriptions", "organizations_url": "https://api.github.com/users/csliangchen/orgs", "repos_url": "https://api.github.com/users/csliangchen/repos", "events_url": "https://api.github.com/users/csliangchen/events{/privacy}", "received_events_url": "https://api.github.com/users/csliangchen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! Do you succeed in loading your pre-trained model in Fairseq?", "Yes, it work well in Fairseq~", "My script:\r\npython ./transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py --roberta_checkpoint_path ./checkpoint_best.pt --pytorch_dump_folder_path ./race_base/roberta-convert-checkpoint/", "you will need --classification-head to load the final layer i guess", "Hi, I also come across this issue, have you solved it?", "No, I give up :(", "Hi, I found that --roberta_checkpoint_path should be ./ not ./checkpoint_best.pt", "Is there a way to convert from a PyTorch bin (obtained with Huggin Face) to a Roberta model? When I run this script:\r\n```\r\nfrom fairseq.models.roberta import RobertaModel\r\nroberta= RortaModel.from_pretrained('/path/to/checkpoint/folder/',checkpoint_file='pytorch_model.bin')\r\n```\r\n\r\nI got this error:\r\n```\r\nFile \"/usr/local/lib/python3.6/dist-packages/fairseq/checkpoint_utils.py\", line 162, in load_checkpoint_to_cpu\r\n args = state[\"args\"]\r\nKeyError: 'args'\r\n```\r\n\r\nI think I have the opposite problem, but I don't find a script for this.", "@paulthemagno https://github.com/pytorch/fairseq/issues/1514#issuecomment-567934059", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "`--roberta_checkpoint_path` should be path to folder with checkpoint named as model.pt.\r\nI am getting:\r\n```\r\nTraceback (most recent call last):\r\n File \"convert_roberta_original_pytorch_checkpoint_to_pytorch.py\", line 176, in <module>\r\n convert_roberta_checkpoint_to_pytorch(\r\n File \"convert_roberta_original_pytorch_checkpoint_to_pytorch.py\", line 54, in convert_roberta_checkpoint_to_pytorch\r\n roberta = FairseqRobertaModel.from_pretrained(roberta_checkpoint_path)\r\n File \"venv/lib/python3.8/site-packages/fairseq/models/roberta/model.py\", line 244, in from_pretrained\r\n x = hub_utils.from_pretrained(\r\n File \"venv/lib/python3.8/site-packages/fairseq/hub_utils.py\", line 70, in from_pretrained\r\n models, args, task = checkpoint_utils.load_model_ensemble_and_task(\r\n File \"venv/lib/python3.8/site-packages/fairseq/checkpoint_utils.py\", line 279, in load_model_ensemble_and_task\r\n state = load_checkpoint_to_cpu(filename, arg_overrides)\r\n File \"venv/lib/python3.8/site-packages/fairseq/checkpoint_utils.py\", line 231, in load_checkpoint_to_cpu\r\n setattr(args, arg_name, arg_val)\r\nAttributeError: 'NoneType' object has no attribute 'bpe'\r\n```", "@djstrong I'm getting the same issue. were you able to get around this problem?" ]
1,573
1,645
1,582
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> when i want to convert my model which prertrained in fairseq, the error like follows: 2019-11-16 23:53:48.119139: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2019-11-16 23:53:48.148610: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2097740000 Hz 2019-11-16 23:53:48.151302: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x557dc08b8900 executing computations on platform Host. Devices: 2019-11-16 23:53:48.151342: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version loading archive file ./checkpoint_best.pt extracting archive file ./checkpoint_best.pt to temp dir /tmp/tmpz46rlxeu Traceback (most recent call last): File "./transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 178, in <module> args.classification_head File "./transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 46, in convert_roberta_checkpoint_to_pytorch roberta = FairseqRobertaModel.from_pretrained(roberta_checkpoint_path) File "/home/liangchen/.conda/envs/py36lc/lib/python3.6/site-packages/fairseq/models/roberta/model.py", line 139, in from_pretrained **kwargs, File "/home/liangchen/.conda/envs/py36lc/lib/python3.6/site-packages/fairseq/hub_utils.py", line 33, in from_pretrained model_path = file_utils.load_archive_file(model_name_or_path) File "/home/liangchen/.conda/envs/py36lc/lib/python3.6/site-packages/fairseq/file_utils.py", line 80, in load_archive_file with tarfile.open(resolved_archive_file, 'r:' + ext) as archive: File "/home/liangchen/.conda/envs/py36lc/lib/python3.6/tarfile.py", line 1588, in open raise CompressionError("unknown compression type %r" % comptype) tarfile.CompressionError: unknown compression type 'pt' Ask for help. Thanks a lot!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1850/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1849
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1849/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1849/comments
https://api.github.com/repos/huggingface/transformers/issues/1849/events
https://github.com/huggingface/transformers/pull/1849
523,853,950
MDExOlB1bGxSZXF1ZXN0MzQxNzYyMjE3
1,849
run_finetuning resize token embeddings
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=h1) Report\n> Merging [#1849](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0477b307c7501ea76e01b03cb387a2312db752b3?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1849/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1849 +/- ##\n=======================================\n Coverage 84.08% 84.08% \n=======================================\n Files 97 97 \n Lines 14316 14316 \n=======================================\n Hits 12037 12037 \n Misses 2279 2279\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=footer). Last update [0477b30...a76db2e](https://codecov.io/gh/huggingface/transformers/pull/1849?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks good to me, thanks for looking into it!", "You are welcome and thanks @LysandreJik for interested. Do you mind approve and merge, if there is no problem?" ]
1,573
1,574
1,574
CONTRIBUTOR
null
Please see #1848
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1849/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1849", "html_url": "https://github.com/huggingface/transformers/pull/1849", "diff_url": "https://github.com/huggingface/transformers/pull/1849.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1849.patch", "merged_at": 1574712393000 }
https://api.github.com/repos/huggingface/transformers/issues/1848
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1848/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1848/comments
https://api.github.com/repos/huggingface/transformers/issues/1848/events
https://github.com/huggingface/transformers/issues/1848
523,852,649
MDU6SXNzdWU1MjM4NTI2NDk=
1,848
CUDA runtime error (59) : device-side assert triggered
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, this seems to be a problem. Thanks for the PR!", "fixed at #1849 " ]
1,573
1,574
1,574
CONTRIBUTOR
null
Hi, When you run `run_lm_finetuning.py` with a custom tokenizer via `--tokenizer_name tokenizer` parameter, you need to resize model embedding according to the new tokenizer. Otherwise you are going to get `CUDA runtime error (59) : device-side assert triggered` or `Assertion 'srcIndex < srcSelectDimSize' failed`. I fixed this issue and going to create a PR if it's okay? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1848/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1847
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1847/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1847/comments
https://api.github.com/repos/huggingface/transformers/issues/1847/events
https://github.com/huggingface/transformers/pull/1847
523,852,292
MDExOlB1bGxSZXF1ZXN0MzQxNzYxMDgz
1,847
Update modeling_utils.py by adding "DUMMY_INPUTS" after "logger" variable.
{ "login": "RubensZimbres", "id": 20270054, "node_id": "MDQ6VXNlcjIwMjcwMDU0", "avatar_url": "https://avatars.githubusercontent.com/u/20270054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RubensZimbres", "html_url": "https://github.com/RubensZimbres", "followers_url": "https://api.github.com/users/RubensZimbres/followers", "following_url": "https://api.github.com/users/RubensZimbres/following{/other_user}", "gists_url": "https://api.github.com/users/RubensZimbres/gists{/gist_id}", "starred_url": "https://api.github.com/users/RubensZimbres/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RubensZimbres/subscriptions", "organizations_url": "https://api.github.com/users/RubensZimbres/orgs", "repos_url": "https://api.github.com/users/RubensZimbres/repos", "events_url": "https://api.github.com/users/RubensZimbres/events{/privacy}", "received_events_url": "https://api.github.com/users/RubensZimbres/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=h1) Report\n> Merging [#1847](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0477b307c7501ea76e01b03cb387a2312db752b3?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1847/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1847 +/- ##\n==========================================\n+ Coverage 84.08% 84.08% +<.01% \n==========================================\n Files 97 97 \n Lines 14316 14317 +1 \n==========================================\n+ Hits 12037 12038 +1 \n Misses 2279 2279\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1847/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.67% <100%> (+0.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=footer). Last update [0477b30...07bda77](https://codecov.io/gh/huggingface/transformers/pull/1847?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I think this bug is already fixed on master.", "It should be fixed the latest release (2.2.1).\r\nFeel free to reopen if it's not the case." ]
1,573
1,575
1,575
NONE
null
This Pull Request fixes a probable bug found using Transformers in Anaconda, using TFBertForSequenceClassification, Tensorflow 2.0.0b0, according to: https://github.com/huggingface/transformers/issues/1810#issuecomment-554572471
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1847/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1847", "html_url": "https://github.com/huggingface/transformers/pull/1847", "diff_url": "https://github.com/huggingface/transformers/pull/1847.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1847.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1846
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1846/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1846/comments
https://api.github.com/repos/huggingface/transformers/issues/1846/events
https://github.com/huggingface/transformers/pull/1846
523,823,764
MDExOlB1bGxSZXF1ZXN0MzQxNzQwOTM5
1,846
fix summary_type value of SequenceSummary
{ "login": "tamuhey", "id": 24998666, "node_id": "MDQ6VXNlcjI0OTk4NjY2", "avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamuhey", "html_url": "https://github.com/tamuhey", "followers_url": "https://api.github.com/users/tamuhey/followers", "following_url": "https://api.github.com/users/tamuhey/following{/other_user}", "gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions", "organizations_url": "https://api.github.com/users/tamuhey/orgs", "repos_url": "https://api.github.com/users/tamuhey/repos", "events_url": "https://api.github.com/users/tamuhey/events{/privacy}", "received_events_url": "https://api.github.com/users/tamuhey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=h1) Report\n> Merging [#1846](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0477b307c7501ea76e01b03cb387a2312db752b3?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1846/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1846 +/- ##\n=======================================\n Coverage 84.08% 84.08% \n=======================================\n Files 97 97 \n Lines 14316 14316 \n=======================================\n Hits 12037 12037 \n Misses 2279 2279\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1846/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.64% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=footer). Last update [0477b30...d08a338](https://codecov.io/gh/huggingface/transformers/pull/1846?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, thanks!" ]
1,573
1,575
1,575
CONTRIBUTOR
null
from #1845
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1846/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1846", "html_url": "https://github.com/huggingface/transformers/pull/1846", "diff_url": "https://github.com/huggingface/transformers/pull/1846.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1846.patch", "merged_at": 1575462519000 }
https://api.github.com/repos/huggingface/transformers/issues/1845
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1845/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1845/comments
https://api.github.com/repos/huggingface/transformers/issues/1845/events
https://github.com/huggingface/transformers/issues/1845
523,823,525
MDU6SXNzdWU1MjM4MjM1MjU=
1,845
summary_type value of SequenceSummary is incorrect
{ "login": "tamuhey", "id": 24998666, "node_id": "MDQ6VXNlcjI0OTk4NjY2", "avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tamuhey", "html_url": "https://github.com/tamuhey", "followers_url": "https://api.github.com/users/tamuhey/followers", "following_url": "https://api.github.com/users/tamuhey/following{/other_user}", "gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}", "starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions", "organizations_url": "https://api.github.com/users/tamuhey/orgs", "repos_url": "https://api.github.com/users/tamuhey/repos", "events_url": "https://api.github.com/users/tamuhey/events{/privacy}", "received_events_url": "https://api.github.com/users/tamuhey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,573
1,575
1,575
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/0477b307c7501ea76e01b03cb387a2312db752b3/transformers/modeling_utils.py#L731 I think `if hasattr(config, 'summary_use_proj')` is incorrect, `if hasattr(config, 'summary_type')` is correct.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1845/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1845/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1844
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1844/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1844/comments
https://api.github.com/repos/huggingface/transformers/issues/1844/events
https://github.com/huggingface/transformers/pull/1844
523,793,866
MDExOlB1bGxSZXF1ZXN0MzQxNzIzNjI4
1,844
Rebase and merge Louismartin/camembert
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please, continue the comments [here](https://github.com/huggingface/transformers/pull/1822#issuecomment-558095282)\r\n\r\n> Great work!\r\n> \r\n> Could you show how to get the embedding vector of a sentence please?\r\n> \r\n> ```python\r\n> from transformers import CamembertTokenizer\r\n> import torch\r\n> \r\n> camembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n> \r\n> camembert_tokenizer.encode(\"Salut, ça va ?\") # How to get embedding of this sentence not just the ids of tokens ? \r\n> ```" ]
1,573
1,574
1,573
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1844/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1844", "html_url": "https://github.com/huggingface/transformers/pull/1844", "diff_url": "https://github.com/huggingface/transformers/pull/1844.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1844.patch", "merged_at": 1573881068000 }
https://api.github.com/repos/huggingface/transformers/issues/1843
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1843/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1843/comments
https://api.github.com/repos/huggingface/transformers/issues/1843/events
https://github.com/huggingface/transformers/issues/1843
523,663,376
MDU6SXNzdWU1MjM2NjMzNzY=
1,843
"This tokenizer does not make use of special tokens." warning
{ "login": "weiguowilliam", "id": 31396452, "node_id": "MDQ6VXNlcjMxMzk2NDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiguowilliam", "html_url": "https://github.com/weiguowilliam", "followers_url": "https://api.github.com/users/weiguowilliam/followers", "following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}", "gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions", "organizations_url": "https://api.github.com/users/weiguowilliam/orgs", "repos_url": "https://api.github.com/users/weiguowilliam/repos", "events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}", "received_events_url": "https://api.github.com/users/weiguowilliam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Same here. Is there any way to suppress this warning? I use `run_lm_finetuning.py` to finetune distilgpt2 and it outputs thousands \"This tokenizer does not make use of special tokens.\". It's so annoying :(", "Here's how to suppress the warnings until this is fixed:\r\n\r\n```py\r\nimport logging\r\nlogging.getLogger('transformers.tokenization_utils').setLevel(logging.ERROR)\r\n```", "> Here's how to suppress the warnings until this is fixed:\r\n> \r\n> ```python\r\n> import logging\r\n> logging.getLogger('transformers.tokenization_utils').disabled = True\r\n> ```\r\n\r\nThank you!", "Is this fixed? If not, I think it should be open until it's been fixed.", "This has been fixed on the master and in the latest release (2.2.1)", "Hi, I use the latest release but I still have this problem.", "@iedmrc \r\nI close it because the 'log' method works. I don't know whether it's a bug or not.", "Hi @yeliu918, could you please show us what you obtain when running this script in your environment?\r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer, __version__\r\nprint(__version__)\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nprint(tokenizer.encode(\"What does this output?\"))\r\n```", "I am getting warning despite trying everything mentioned above....\r\n\r\n```\r\nimport torch\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer, __version__\r\nimport logging\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nprint(__version__) \r\nlogging.getLogger('transformers.tokenization_utils').disabled = True\r\ntokens_tensor = torch.tensor([tokenizer.encode(\"some example sentence\")])\r\ngreedy_output = model.generate(tokens_tensor, max_length=60, num_beams=16)\r\n```\r\n\r\nVersion 2.8.0\r\n\r\nSetting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence" ]
1,573
1,587
1,574
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I just updated to the latest version transformers. Now when I use tokenizer to encode word, it always show the warning "This tokenizer does not make use of special tokens." Is there any way to hide that warning? Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1843/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1843/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1842
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1842/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1842/comments
https://api.github.com/repos/huggingface/transformers/issues/1842/events
https://github.com/huggingface/transformers/issues/1842
523,602,478
MDU6SXNzdWU1MjM2MDI0Nzg=
1,842
xlm-mlm-17-1280 model masked word prediction
{ "login": "ceatlinar", "id": 24279886, "node_id": "MDQ6VXNlcjI0Mjc5ODg2", "avatar_url": "https://avatars.githubusercontent.com/u/24279886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ceatlinar", "html_url": "https://github.com/ceatlinar", "followers_url": "https://api.github.com/users/ceatlinar/followers", "following_url": "https://api.github.com/users/ceatlinar/following{/other_user}", "gists_url": "https://api.github.com/users/ceatlinar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ceatlinar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceatlinar/subscriptions", "organizations_url": "https://api.github.com/users/ceatlinar/orgs", "repos_url": "https://api.github.com/users/ceatlinar/repos", "events_url": "https://api.github.com/users/ceatlinar/events{/privacy}", "received_events_url": "https://api.github.com/users/ceatlinar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Would it be possible to use XML-R #1769 ? Its model has a simple description ( `Masked Language Models` in chapter 3) and is similar to BERT-Base besides tokenization, training configuration and language embeddings.", "Hi\r\nThanks for the advice but idk if the model you mentioned has a pretrained one for Turkish because I need to use it for Turkish. Also it is kind of a need for me to use the model I asked for prediction. Any tips on how I could use that model for getting masked word prediction would be great. Thanks in advance", "There are also multilingual, pretrained models for BERT, which we could try. Usually the quality decreases in large, multilingual models with very different languages.\r\nBut they have mostly the similar architecture like `bert-base`, so we could try to rerun the linked example with the line `modelpath = \"bert-base-multilingual-cased\"`.", "I get the following warning and error when trying modelpath = \"bert-base-multilingual-cased\":\r\nSorry I am not familiar with the transformers so it may be an easy error to fix but Idk how\r\nThe pre-trained model you are loading is a cased model but you have not set `do_lower_case` to False. We are setting `do_lower_case=False` for you but you may want to check this behavior.\r\nTraceback (most recent call last):\r\n File \"e.py\", line 13, in <module>\r\n masked_index = tokenized_text.index(target)\r\nValueError: 'hungry' is not in list\r\n", "'hungry' is in the list, but as two tokens since the multilingual model has a different vocabulary. Therefore, we have to tokenize the target word. Check this out:\r\n\r\n```\r\n#!/usr/bin/python3\r\n#\r\n# first Axiom: Aaron Swartz is everything\r\n# second Axiom: The Schwartz Space is his discription of physical location\r\n# first conclusion: His linear symmetry is the Fourier transform\r\n# second conclusion: His location is the Montel space\r\n# Third conclusion: His location is the Fréchet space\r\n\r\nimport torch\r\nfrom transformers import BertModel, BertTokenizer, BertForMaskedLM\r\n\r\nmodelname = \"bert-base-multilingual-cased\"\r\ntokenizer = BertTokenizer.from_pretrained(modelname)\r\nmodel = BertModel.from_pretrained(modelname)\r\n\r\ndef predictMask(maskedText, masked_index):\r\n # Convert token to vocabulary indices\r\n indexed_tokens = tokenizer.convert_tokens_to_ids(maskedText)\r\n # Define sentence A and B indices associated to 1st and 2nd sentences (see paper)\r\n segments_ids = [1] * len(maskedText)\r\n\r\n # Convert inputs to PyTorch tensors\r\n tokens_tensor = torch.tensor([indexed_tokens])\r\n segments_tensors = torch.tensor([segments_ids])\r\n # Load pre-trained model (weights)\r\n model = BertForMaskedLM.from_pretrained(modelname)\r\n model.eval()\r\n\r\n # Predict all tokens\r\n predictions = model(tokens_tensor, segments_tensors)\r\n predicted_index = torch.argmax(predictions[0][0][masked_index]).item()\r\n predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])\r\n\r\n print(\"Original:\", text)\r\n print(\"Masked:\", \" \".join(maskedText))\r\n\r\n print(\"Predicted token:\", predicted_token)\r\n maskedText[masked_index] = predicted_token[0]\r\n\r\n # delete this section for faster inference\r\n print(\"Other options:\")\r\n # just curious about what the next few options look like.\r\n for i in range(10):\r\n predictions[0][0][masked_index][predicted_index] = -11100000\r\n predicted_index = torch.argmax(predictions[0][0][masked_index]).item()\r\n predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])\r\n print(predicted_token)\r\n\r\n\r\n print(\"Masked, tokenized text with the prediction:\", maskedText)\r\n return maskedText\r\n\r\n\r\ntext = \"let´s go fly a kite!\"\r\ntarget = \"kite\"\r\ntokenized_text = tokenizer.tokenize(text)\r\ntokenized_target = tokenizer.tokenize(target)\r\nprint(\"tokenized text:\", tokenized_text)\r\nprint(\"tokenized target:\", tokenized_target)\r\n\r\n# Mask a token that we will try to predict back with `BertForMaskedLM`\r\nmasked_index = tokenized_text.index(tokenized_target[0])\r\nfor i in range(len(tokenized_target)):\r\n tokenized_text[masked_index+i] = '[MASK]'\r\n\r\nfor i in range(len(tokenized_target)):\r\n tokenized_text = predictMask(tokenized_text, masked_index+i)\r\n```", "I tried the code but it's giving word pieces suggestions, not whole word. And the suggestions are poor. Thank you so much for your effort but this is not useful for me unless somehow I could get whole word suggestions. Also, I am still seeking for an implementation of xlm model to get prediction, of anyone could help, that would be great", "Don't the pieces build complete words in the end?\r\nRead my first answer for XML, the mentioned model supports the turkish language.", "Hi, you can predict a masked word with XLM as you would do with any other MLM-based model. Here's an example using the checkpoint `xlm-mlm-17-1280` you mentioned:\r\n\r\n```py\r\nfrom transformers import XLMTokenizer, XLMWithLMHeadModel\r\nimport torch\r\n\r\n# load tokenizer\r\ntokenizer = XLMTokenizer.from_pretrained(\"xlm-mlm-17-1280\")\r\n\r\n# encode sentence with a masked token in the middle\r\nsentence = torch.tensor([tokenizer.encode(\"This was the first time Nicolas ever saw a \" + tokenizer.mask_token + \". It was huge.\")])\r\n\r\n# Identify the masked token position\r\nmasked_index = torch.where(sentence == tokenizer.mask_token_id)[1].tolist()[0]\r\n\r\n# Load model\r\nmodel = XLMWithLMHeadModel.from_pretrained(\"xlm-mlm-17-1280\")\r\n\r\n# Get the five top answers\r\nresult = model(sentence)\r\nresult = result[0][:, masked_index].topk(5).indices\r\nresult = result.tolist()[0]\r\n\r\nprint(tokenizer.decode(result))\r\n# monster dragon snake wolf tiger\r\n```", "Thank you so much guys for the replies, they been very helpfull. " ]
1,573
1,574
1,574
NONE
null
Hi I would like some help with how to use pretrained xlm-mlm-17-1280 model to get predictions for masked word prediction. I have followed http://mayhewsw.github.io/2019/01/16/can-bert-generate-text/ for BERT mask prediction and it is working. Could you help me with how to use xlm-mlm-17-1280 model for word prediction. I need to get prediction for Turkish Language which is one of the languages in 17 languages
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1842/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1841
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1841/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1841/comments
https://api.github.com/repos/huggingface/transformers/issues/1841/events
https://github.com/huggingface/transformers/issues/1841
523,567,401
MDU6SXNzdWU1MjM1Njc0MDE=
1,841
BPE error when fine-tuning a CTRL model
{ "login": "orenmelamud", "id": 55256832, "node_id": "MDQ6VXNlcjU1MjU2ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/55256832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orenmelamud", "html_url": "https://github.com/orenmelamud", "followers_url": "https://api.github.com/users/orenmelamud/followers", "following_url": "https://api.github.com/users/orenmelamud/following{/other_user}", "gists_url": "https://api.github.com/users/orenmelamud/gists{/gist_id}", "starred_url": "https://api.github.com/users/orenmelamud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orenmelamud/subscriptions", "organizations_url": "https://api.github.com/users/orenmelamud/orgs", "repos_url": "https://api.github.com/users/orenmelamud/repos", "events_url": "https://api.github.com/users/orenmelamud/events{/privacy}", "received_events_url": "https://api.github.com/users/orenmelamud/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Can you post the code for the `run_lm_finetuning`? I am not as familiar with what this error message might entail. ", "Sure. Here it is.\r\n\r\n[run_ctrl_finetuning.py.zip](https://github.com/huggingface/transformers/files/3852748/run_ctrl_finetuning.py.zip)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,579
1,579
NONE
null
Hi, cc @keskarnitish I'm using a slightly modified version of the exapmles/run_lm_finetuning.py code to fine-tune a CTRL model and getting this BPE warning: ``` WARNING - transformers.tokenization_ctrl - Saving vocabulary to /home/ubuntu/data/ctrl/pytrained4/merges.txt: BPE merge indices are not consecutive. Please check that the tokenizer is not corrupted! ``` Is this something I should be concerned about?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1841/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1840
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1840/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1840/comments
https://api.github.com/repos/huggingface/transformers/issues/1840/events
https://github.com/huggingface/transformers/pull/1840
523,372,225
MDExOlB1bGxSZXF1ZXN0MzQxMzg3NTA5
1,840
Sampling sequence generator for transformers
{ "login": "rlouf", "id": 3885044, "node_id": "MDQ6VXNlcjM4ODUwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rlouf", "html_url": "https://github.com/rlouf", "followers_url": "https://api.github.com/users/rlouf/followers", "following_url": "https://api.github.com/users/rlouf/following{/other_user}", "gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}", "starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rlouf/subscriptions", "organizations_url": "https://api.github.com/users/rlouf/orgs", "repos_url": "https://api.github.com/users/rlouf/repos", "events_url": "https://api.github.com/users/rlouf/events{/privacy}", "received_events_url": "https://api.github.com/users/rlouf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=h1) Report\n> Merging [#1840](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8618bf15d6edc8774cedc0aae021d259d89c91fc?src=pr&el=desc) will **increase** coverage by `0.32%`.\n> The diff coverage is `13.93%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1840/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1840 +/- ##\n==========================================\n+ Coverage 78.91% 79.23% +0.32% \n==========================================\n Files 131 131 \n Lines 19450 19680 +230 \n==========================================\n+ Hits 15348 15593 +245 \n+ Misses 4102 4087 -15\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.1% <ø> (ø)` | :arrow_up: |\n| [transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fdXRpbHMucHk=) | `93.75% <100%> (+0.89%)` | :arrow_up: |\n| [transformers/configuration\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25feGxtLnB5) | `96.42% <100%> (+0.13%)` | :arrow_up: |\n| [transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `86.49% <11.11%> (-1.81%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.72% <12.5%> (+1.51%)` | :arrow_up: |\n| [transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2VuY29kZXJfZGVjb2Rlci5weQ==) | `27.9% <16.66%> (+1.98%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `64.29% <5.78%> (-28.74%)` | :arrow_down: |\n| [transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.89% <80%> (-0.01%)` | :arrow_down: |\n| [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.94% <0%> (+0.58%)` | :arrow_up: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/1840/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=footer). Last update [8618bf1...f86ed23](https://codecov.io/gh/huggingface/transformers/pull/1840?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Notes:\r\n \r\n1. XLM's CLM model gives good results for only one of the languages.\r\n2. Although using XLM's MLM pretrained weights + a mask token at the end of sequence could have worked, it doesn't give sequences that make sense. I think we should instead display a warning when the user attempts to use a mlm.", "Ok this looks good! It's quite a change to what we were doing previously, which was hosting all the generation logic inside the script. It may have become a bit too cluttered as models with different behaviors and attributes were added, so I believe this is a welcome change.\r\n\r\nThis also means that we're adding a level of abstraction for the user; it does not showcase the way the models should be used for generations with different inputs anymore. This is fine with me but I think we need to add some additional snippets showcasing how the models should be used when doing generation, perhaps in https://huggingface.co/transformers/quickstart.html?", "Do you mean exposing the logic inside `SamplerXLM` and `SamplerXLNet`?", "I think moving some decoding in the core library is a good idea.\r\n\r\nThough I have doubts on a few aspects of this PR:\r\n- the functional approach currently proposed for implementing this feature is very different from the way the rest of the library is designed, and\r\n- having to maintain another location (`generation/sampler`) with model-specific peculiarities adds burden when implementing a new model and is not really the way we are trying to take right now (simplifying the addition of new models).\r\n\r\nI'll try to think a little bit about how we could move this PR closer to the current and future directions of the library and come back with proposals.", "> * the functional approach currently proposed for implementing this feature is very different from the way the rest of the library is designed, and\r\n\r\nI agree it looks different from most of the code in the library, but I am no sure what you mean by functional. The code could not get a lot more OO than this. Plus I think it is very legible as is.\r\n\r\n> * having to maintain another location (`generation/sampler`) with model-specific peculiarities adds burden when implementing a new model and is not really the way we are trying to take right now (simplifying the addition of new models).\r\n\r\nLet's consider for a second what happens when we add model X to the library and X could be used to generate sequences.\r\n\r\n1. X does not require specific pre-treatment or input, computed masks etc at each forward pass in which case we have nothing to do and it is supported out of the box (`SamplerSingleStack`).\r\n2. X does require some specific preprocessing before each forward pass (hooks on the forward pass if you wish), you subclass `SamplerSingleStack` and reference it in the factory.\r\n3. You're done\r\n\r\nTaking a step back I don't think it is a great idea to add model-specific logic in `generate` either (this would belong in a more battery-included library). What we could do, however, is move `SamplerSingleStack` subclasses (`SamplerFroXLM` and `SamplerForXLNet`) in the example script. I am fine with keeping the `SamplerSingleStack` and the future `SamplerEncoderDecoder` there as they are very general.\r\n\r\nAnd yes, please give me suggestions to future-proof the PR as I currently have little visibility on this.", "After some thoughts, here is a proposal for re-writing the user experience.\r\n\r\nContext:\r\n- I'm ok with adding abstractions as long as they are used internally. For user-facing abstraction like here, we need a really strong reason to increase the learning curve for the user and I don't see one here.\r\n- the second important element is to reduce the burden for us (and increasingly the community) of updating many things at many places when adding a new model (currently already too high).\r\n\r\nProposal:\r\n- let's have the decoding logic be accessed by the user by calling a (new) `decode` method of `PretrainedModel` instead of a new class (so no new abstraction).\r\n- a Decoder can be added to the model at instantiation (in the `PretrainedModel` class `__init__`), probably only if `get_output_embeddings` is not None (i.e. with have a model with a LM head of some kind), and selected with a configuration parameter (to add to `PretrainedConfig`).\r\n- a specific `_prepare_input_for_decoding` method (can find a better name) can be in charge of preparing the inputs for decoding and overridden in the specific model classes with model-specific preprocessing if needed (ex in XLM and XLNet). This way all the specificities of the model architectures are in the model class. This method should also take care of PyTorch/TF 2.0 specificities (currently the decoding classes are only PyTorch specific). I think the samplers can even be in `model_utils.py` if they are PyTorch specific or in another file if they are framework agnostic.\r\n\r\nTell me if it's not clear, I can detail further.\r\n\r\nQuestion:\r\n- if we have a decoder with trained parameters in the future, we'll want to have the `decode` method behind the `forward` method to catch all PyTorch hooks (like in [XLM](https://github.com/facebookresearch/XLM/blob/master/src/model/transformer.py#L320-L330) for instance). Probably for now we can just not worry about that.", "Clear for now, but the devil is in the details so I'll probably ask for clarification later.", "I followed your idea to move the logic closer to the models, and simplified the implementation a little bit. As a result, the `sampler.py` only contains the `Sampler` and `SamplerSingleStack` class (`SamplerEncoderDecoder` coming, slightly different from the single stack case as we only want to pass through the encoder once). Here is a summary of changes:\r\n\r\n1. Removed model-specific classes in the `generate` module. It is much nicer.\r\n2. Added a `decode` and a `_prepare_input_for_decoding` method to `PretrainedModel`. By default, the first raises a `NotImplementedError`, the second one returns a dictionary `{\"input_ids\": input_ids}`\r\n3. Overrided `decode` in `GPT2LMHeadModel,`OpenAIGPTLMHeadModel`,` XLNetLMHeadModel`, `TransfoXLLMHeadModel`, `CTRLLMHeadModel`, `XLMWithLMHeadModel`; If the user tries to initiate a sampler with another model, they will be greated with an explanatory error message.\r\n\r\nAs a result the design feels nicer and easier to maintain & to use.", "I needed to add Encoder-Decoder support as a backup to the beam search in `example-summarization`, so I went on with it.\r\n\r\nAdding encoder-decoder support with the previous changes was a breeze: I added a `decode` and an `encoder` method to the `PreTrainedEncoderDecoder` class and added a very simple `SamplerEncoderDecoder` class. I also moved the preparation of the encoder/decoder kwargs into a new method (which can thus be overriden).\r\n\r\nRouting the factory function is based on the presence/absence of the `decode` and `encode` methods in the respective classes.\r\n\r\nImprovements for another PR:\r\n- Support batch size > 1\r\n- Support num_samples > 1\r\n\r\n@thomwolf I believe I implemented a simpler version of your recommendations. Let me know if there is something that I did not understand correctly.", "Hi @thomwolf could you tell me if the changes (to the single-stack case) I just made are in the spirit of your proposal? I decided to remove the `SamplerSingleStack` class that doesn't really have a purpose now and `SamplerEncoderDecoder` will have the same fate. All the decoding logic is in the `decode` function in `PreTrainedModel`.\r\n\r\nWith this implementation I don't think we do need a `Sampler` class anymore. We can just have a collection of functions (its current methods) and pass the configuration as a dictionary. It is simpler, and makes `decode` truly indempotent. What do you think?\r\n\r\nIt am still afraid the `decode` function is going to become a big plate of spaghetti when we add new methods, but maybe there is a way to prevent that from happening?\r\n\r\n(there may be a few errors here and there, I just quickly drafted the new implementation).", "@thomwolf I followed your API recommendations. As a result:\r\n- The `Sampler` class has been moved to `modeling_utils.py`. `SingleStackSampler` an `EncoderDecoderSampler` are not needed anymore so I removed them. I thus also removed the `generate` folder;\r\n- `PreTrainedModel` and `PreTrainedEncoderDecoder` now have a `decode` method that raises an error if the model (decoder) does not have a `get_output_embeddings` method and raises an warning if the model is not on the same device as the prompt;\r\n- I added a `get_output_embeddings` method to `TransfoXLLMHeadModel` **please check as I did not take the time to dive in the intricacies of adaptive softmax**\r\n- I initialize `Sampler` each time `decode` is called; it is simpler than initializing it at the same time as the model and incurs little to no overhead;\r\n- I add a `_prepare_inputs_for_decoding` method to `PreTrainedModel` that is overridden for every model that needs it;\r\n- The examples in `run_generation.py` have been updated to reflect the API change. I ran the script for every model;\r\n- I updated all tests;\r\n- I documented the `decode` methods;\r\n- `device` is only needed to check that the decoder and prompt are on the same device. It is not needed in the `Sampler` class at all, so I removed it there.\r\n- I squashed my commits and rebased on master." ]
1,573
1,651
1,576
CONTRIBUTOR
null
[@thomwolf version]: In this PR I introduce a flexible API to generate sequences for the models of the library provided with LM heads. The API is designed to have a very flexible and quite exhaustive API: - with/without a prompt - with/without beam search - with/without greedy decoding/sampling - with any (and combination) of top-k/top-p/penalized repetitions Only single-stack architectures are currently supported. Here is an example generating 4 continuations of a prompt with using a beam search of 3 for each continuation and token sampling at each step with top-p filtering and repetition penalization until we reach a <unk> token or a max length of 50. ```python import torch from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt') model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt') input_ids = tokenizer.encode("This is the", return_tensors='pt') output = model.generate(input_ids, max_length=50, do_sample=True, num_beams=3, temperature=0.7, top_k=0, top_p=0.8, repetition_penalty=1.4, bos_token_id=None, pad_token_id=None, eos_token_ids=tok.unk_token_id, batch_size=None, length_penalty=None, num_return_sequences=4)[0] for j in range(len(output)): print(tok.decode(output[j].tolist())) ``` To-add features: - Add support for encoder-decoder architectures - Add support for past and cached states
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1840/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1840", "html_url": "https://github.com/huggingface/transformers/pull/1840", "diff_url": "https://github.com/huggingface/transformers/pull/1840.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1840.patch", "merged_at": 1576934856000 }
https://api.github.com/repos/huggingface/transformers/issues/1839
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1839/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1839/comments
https://api.github.com/repos/huggingface/transformers/issues/1839/events
https://github.com/huggingface/transformers/pull/1839
523,332,675
MDExOlB1bGxSZXF1ZXN0MzQxMzU3MjM5
1,839
Add support for Japanese BERT models by cl-tohoku
{ "login": "singletongue", "id": 17107587, "node_id": "MDQ6VXNlcjE3MTA3NTg3", "avatar_url": "https://avatars.githubusercontent.com/u/17107587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/singletongue", "html_url": "https://github.com/singletongue", "followers_url": "https://api.github.com/users/singletongue/followers", "following_url": "https://api.github.com/users/singletongue/following{/other_user}", "gists_url": "https://api.github.com/users/singletongue/gists{/gist_id}", "starred_url": "https://api.github.com/users/singletongue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/singletongue/subscriptions", "organizations_url": "https://api.github.com/users/singletongue/orgs", "repos_url": "https://api.github.com/users/singletongue/repos", "events_url": "https://api.github.com/users/singletongue/events{/privacy}", "received_events_url": "https://api.github.com/users/singletongue/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This looks great @singletongue. For the file upload, I will send you a way to upload to our S3, early next week, if that works for you.", "That will be great. Thank you!", "I could try it on my side, it seems to work well! \r\n\r\nI don't believe I've seen any benchmark on Japanese tasks in your repository, how did you evaluate the models? Did you use perplexity on portions of the Japanese wikipedia?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=h1) Report\n> Merging [#1839](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6a73382706ce3c6905023872f63a680f0eb419a4?src=pr&el=desc) will **decrease** coverage by `0.19%`.\n> The diff coverage is `61.73%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1839/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1839 +/- ##\n=========================================\n- Coverage 80.07% 79.87% -0.2% \n=========================================\n Files 112 114 +2 \n Lines 16867 17058 +191 \n=========================================\n+ Hits 13506 13625 +119 \n- Misses 3361 3433 +72\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/configuration\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYmVydC5weQ==) | `87.09% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.75% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `96.31% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `44.18% <33.33%> (-0.82%)` | :arrow_down: |\n| [...nsformers/tests/tokenization\\_bert\\_japanese\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9iZXJ0X2phcGFuZXNlX3Rlc3QucHk=) | `53.84% <53.84%> (ø)` | |\n| [transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0X2phcGFuZXNlLnB5) | `67.41% <67.41%> (ø)` | |\n| [transformers/tests/utils.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3V0aWxzLnB5) | `93.54% <83.33%> (+2.24%)` | :arrow_up: |\n| [transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1839/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9iZXJ0LnB5) | `96.38% <0%> (+0.45%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=footer). Last update [6a73382...0f53007](https://codecov.io/gh/huggingface/transformers/pull/1839?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "At the moment, there isn't an official publication regarding this work yet.\r\n\r\nAs a simple experiment, I applied the models to a Japanese document classification task using [livedoor news corpus](https://www.rondhuit.com/download.html#ldcc).\r\nIt is a 9-class classification, and I've confirmed that some of our models outperformed `bert-base-multilingual-cased` model.\r\n\r\n| Model | Macro Precision | Macro Recall | Macro F1 |\r\n| --- | --- | --- | --- |\r\n| `bert-base-multilingual-cased` | 94.70 | 94.86 | 94.74 |\r\n| `bert-base-japanese` | 95.64 | 95.58 | 95.60 |\r\n| `bert-base-japanese-whole-word-masking` | 95.18 | 95.39 | 95.27 |\r\n| `bert-base-japanese-char` | 94.32 | 93.96 | 94.09 |\r\n| `bert-base-japanese-char-whole-word-masking` | 94.98 | 94.88 | 94.92 |\r\n\r\n(Note that we expect the `char` models to be effective for SQuAD-like tasks, where span-level representation is needed. More comprehensive experiments are left for future work.)\r\n\r\nAlso, I've manually confirmed that our models predict reasonable tokens given a masked Japanese text.\r\n\r\nShall I add an example script for the classification task?", "Alright, thanks for clarifying. I don't think adding an example script is necessary.\r\n\r\nThis looks good to me but I'll wait for @thomwolf to chime in and review the PR before merging as it adds a few core functionalities (like the tokenizer).", "Hi @singletongue, the first step in uploading your weights to our S3 would be for you to create an account here: https://huggingface.co/join\r\n\r\nThe models will be namespaced under the chosen username or org name.\r\n\r\nCan you let me know if you encounter any issue?", "Thank you for your guidance, @julien-c.\r\n\r\nI've just created an account. Then I was told to use a CLI tool to log in and upload the model. However, I couldn't find the specified CLI in the repository.", "Yeah I should have been clearer, it's just that step 1 for now. What was clear or not clear in that page?\r\n\r\nI will let you know here as soon as the CLI is operational.", "This looks great to me, ok for merging.\r\n\r\nCongratulations and thanks a lot @singletongue, this is an amazing work, both on preparing the data and training the model and on the super clean integration in the library. It's a masterpiece!", "I think we add a test on the new tokenizer class though.\r\n\r\nDo you think you could copy the file `tests/tokenization_bert_test` in a file `tests/tokenization_bert_japanese_test.py` and adapt a few English tests to Japanese specific characters @singletongue?", "Thank you for your review, @thomwolf.\r\nI added a test for the tokenizers.\r\n\r\nThe test on CircleCI failed since the tokenizers need [MeCab](https://taku910.github.io/mecab/) and `mecab-python3` to be installed.\r\nI've checked that the test run successfully on my local environment where the dependencies are installed.", "@singletongue The file upload CLI has been developed in https://github.com/huggingface/transformers/pull/2044 (soon to be merged to master)\r\n\r\nCould you please upload your weights by doing:\r\n```bash\r\ngit checkout cli_upload\r\npip install -e .\r\ntransformers-cli login\r\ntransformers-cli upload\r\n```\r\n\r\nThanks!", "(For the tests requirements, maybe @thomwolf or @LysandreJik can chime in – I think you can either add a `pip install mecab-python3 ` to the config.yml file, or maybe create a specific test target for JP)", "Thank you, @julien-c. I've uploaded all the required files.\r\nAlso, I've updated `.circleci/config.yml` to successfully run tests on the tokenizers.\r\n\r\nThanks!", "Ok LGTM" ]
1,573
1,576
1,576
CONTRIBUTOR
null
This PR adds new BERT models for Japanese text. Details of the models can be found in [this repository](https://github.com/cl-tohoku/bert-japanese). Since the way of tokenization is Japanese-specific, a new file is added for the tokenizers. And, the locations of the files (models, configs, etc.) should be updated when the files are uploaded to S3. (How could I do this?)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1839/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1839/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1839", "html_url": "https://github.com/huggingface/transformers/pull/1839", "diff_url": "https://github.com/huggingface/transformers/pull/1839.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1839.patch", "merged_at": 1576107148000 }
https://api.github.com/repos/huggingface/transformers/issues/1838
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1838/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1838/comments
https://api.github.com/repos/huggingface/transformers/issues/1838/events
https://github.com/huggingface/transformers/issues/1838
523,176,982
MDU6SXNzdWU1MjMxNzY5ODI=
1,838
resize_token_embeddings not implemented for TFBert*
{ "login": "AZMomin", "id": 7630757, "node_id": "MDQ6VXNlcjc2MzA3NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7630757?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AZMomin", "html_url": "https://github.com/AZMomin", "followers_url": "https://api.github.com/users/AZMomin/followers", "following_url": "https://api.github.com/users/AZMomin/following{/other_user}", "gists_url": "https://api.github.com/users/AZMomin/gists{/gist_id}", "starred_url": "https://api.github.com/users/AZMomin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AZMomin/subscriptions", "organizations_url": "https://api.github.com/users/AZMomin/orgs", "repos_url": "https://api.github.com/users/AZMomin/repos", "events_url": "https://api.github.com/users/AZMomin/events{/privacy}", "received_events_url": "https://api.github.com/users/AZMomin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Did you get this to work for you? I'm currently trying to do the same thing but with TFGPT-2", "Any update on when this would be completed?", "I have the same issue, I wanted to use \"resize_token_embeddings\" with TFAlbert model but it's not implemented yet.", "Hi, we have no plans of implementing this on our roadmap, but we are always open to PRs.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any updates on this? Trying to use tf gpt2" ]
1,573
1,588
1,587
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hey, I noticed for the Tensorflow 2 implementation of BERT, the `resize_token_embeddings` function is not implemented, while the Pytorch BERT class does. ### Questions: * Is the implementation of `resize_token_embeddings` for TF models expected? * I'd be happy to help with this * Any other solutions to resizing token embedding? * Possibly manually updating the config, vocab file, and weights to support the added tokens Example: ``` special_tokens = { 'additional_special_tokens': ['[SPC]'] } tokenizer = BertTokenizer.from_pretrained('bert-base-cased') tokens_added = tokenizer.add_special_tokens(special_tokens) model = TFBertForSequenceClassification.from_pretrained('bert-base-cased') model.resize_token_embeddings(len(tokenizer)) ``` Output: ` File ".../transformers/modeling_tf_utils.py", line 115, in resize_token_embeddings raise NotImplementedError` ### Source: `TFPreTrainedModel` https://github.com/huggingface/transformers/blob/74ce8de7d8e0375a9123f9542f3483f46cc8df9b/transformers/modeling_tf_utils.py#L117 Here I think the implementation would look similar to the PyTorch model. Updating the embeddings specifically for BERT would then occur in TFBertMainLayer.resize_token_embeddings. ``` def resize_token_embeddings(self, new_num_tokens=None): """ Resize input token embeddings matrix of the model if new_num_tokens != config.vocab_size. Take care of tying weights embeddings afterwards if the model class has a `tie_weights()` method. Arguments: new_num_tokens: (`optional`) int: New number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None: does nothing and just returns a pointer to the input tokens ``tf.Variable`` Module of the model. Return: ``tf.Variable`` Pointer to the input tokens Embeddings Module of the model """ base_model = getattr(self, self.base_model_prefix, self) # get the base model if needed model_embeds = base_model._resize_token_embeddings(new_num_tokens) if new_num_tokens is None: return model_embeds # Update base model and current model config self.config.vocab_size = new_num_tokens base_model.vocab_size = new_num_tokens # Tie weights again if needed if hasattr(self, 'tie_weights'): self.tie_weights() return model_embeds ``` `TFBertMainLayer` https://github.com/huggingface/transformers/blob/74ce8de7d8e0375a9123f9542f3483f46cc8df9b/transformers/modeling_tf_bert.py#L472 Thanks, Ali Momin
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1838/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1838/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1837
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1837/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1837/comments
https://api.github.com/repos/huggingface/transformers/issues/1837/events
https://github.com/huggingface/transformers/issues/1837
523,145,270
MDU6SXNzdWU1MjMxNDUyNzA=
1,837
Error started happening today: ImportError: cannot import name 'get_linear_schedule_with_warmup'
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "try edit:\r\nfrom transformers import AdamW\r\nfrom transformers import WarmupLinearSchedule as get_linear_schedule_with_warmup", "I tried the WarmupLinearSchedule, but I have a problem no key num_warmup_steps and num_training_steps. \r\n scheduler = WarmupLinearSchedule(optimizer, num_warmup_steps=args.warmup_steps,\r\n num_training_steps=t_total)\r\nI think get_linear_schedule_with_warmup and WarmupLinearSchedule are two different scheduler", "The version of the lib you use is not in sync with the scripts you run (cc @rlouf, @LysandreJik)\r\n\r\nIf you run the scripts from `master`, then you also need to install the lib from `master`:\r\n\r\n```bash\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nThis is a frequent issue so maybe we should do something about it, @thomwolf ", "> I tried the WarmupLinearSchedule, but I have a problem no key num_warmup_steps and num_training_steps.\r\n> scheduler = WarmupLinearSchedule(optimizer, num_warmup_steps=args.warmup_steps,\r\n> num_training_steps=t_total)\r\n> I think get_linear_schedule_with_warmup and WarmupLinearSchedule are two different scheduler\r\n\r\nThey are the same schedulers but we introduced breaking changes, and indeed renamed `warmup_steps` -> `num_warmup_steps` and `t_total` -> ˋnum_training_steps`.\r\n\r\nAnd yes, to work on the same version of the lib as the examples, go in the root directory and:\r\n\r\n```bash\r\nmakevirtualenv my-project && workon my-project # or anything else you use to create a virtual environnement \r\npip install . # or Julien-c’s command\r\n```\r\n\r\n@julien-c I asked for advice on this one.\r\n", "> The version of the lib you use is not in sync with the scripts you run (cc @rlouf, @LysandreJik)\r\n> \r\n> If you run the scripts from `master`, then you also need to install the lib from `master`:\r\n> \r\n> ```shell\r\n> pip install git+https://github.com/huggingface/transformers\r\n> ```\r\n> \r\n> This is a frequent issue so maybe we should do something about it, @thomwolf\r\n\r\nMaybe we can indicate clearly on https://github.com/huggingface/transformers/blob/master/README.md#from-source or documentation.", "same problem for me too :(", "> same problem for me too :(\r\nreinstall the package from local:\r\npip install .", "We are documenting this in #1889. It is because you are trying to run bleeding-edge examples with a pip-installed version of the library, which corresponds to the last release. Do as @YuxiangLu says in a new virtual environment.", "can be closed because solved in #1889 ?", "when breaking changes are introduced the major number should increase to make the users aware.", "Indeed @aminHatim ! This is why this was released in version 2.2.0 an hour ago. Breaking changes are bound to happen when installing from source or running bleeding edges examples (which are based on the source)." ]
1,573
1,575
1,575
CONTRIBUTOR
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): English The problem arise when using: * [ x] the official example scripts: (give details) The squad script: https://github.com/huggingface/transformers/blob/master/examples/run_squad.py The tasks I am working on is: * [X ] an official GLUE/SQUaD task: (give the name) Squad 2.0 from https://rajpurkar.github.io/SQuAD-explorer/ * [ ] my own task or dataset: (give details) ## To Reproduce ``` !pip install transformers import urllib.request url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json' urllib.request.urlretrieve(url, 'train-v2.0.json') url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json' urllib.request.urlretrieve(url, 'dev-v2.0.json') !wget 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad_evaluate.py' # $SQUAD_DIR/train-v1.1.json SQUAD_Train = '/content/train-v2.0.json' SQUAD_Dev = '/content/dev-v2.0.json' !python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file '$SQUAD_Train' \ --predict_file '$SQUAD_Dev' \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --version_2_with_negative \ --output_dir /tmp/debug_squad/ ``` Results in >ImportError: cannot import name 'get_linear_schedule_with_warmup' I ran it yesterday and it worked fine, but today it's not working. For convenience, here's a colab notebook with the code https://colab.research.google.com/drive/1tNisXX5siuNnkuEQ-X_XdEdDTtdJ0qeL
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1837/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1837/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1836
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1836/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1836/comments
https://api.github.com/repos/huggingface/transformers/issues/1836/events
https://github.com/huggingface/transformers/issues/1836
523,116,867
MDU6SXNzdWU1MjMxMTY4Njc=
1,836
Model parallelism support?
{ "login": "orenmelamud", "id": 55256832, "node_id": "MDQ6VXNlcjU1MjU2ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/55256832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orenmelamud", "html_url": "https://github.com/orenmelamud", "followers_url": "https://api.github.com/users/orenmelamud/followers", "following_url": "https://api.github.com/users/orenmelamud/following{/other_user}", "gists_url": "https://api.github.com/users/orenmelamud/gists{/gist_id}", "starred_url": "https://api.github.com/users/orenmelamud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orenmelamud/subscriptions", "organizations_url": "https://api.github.com/users/orenmelamud/orgs", "repos_url": "https://api.github.com/users/orenmelamud/repos", "events_url": "https://api.github.com/users/orenmelamud/events{/privacy}", "received_events_url": "https://api.github.com/users/orenmelamud/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "How do you training CTRL ? Did you check this: https://github.com/huggingface/transformers/blob/0477b307c7501ea76e01b03cb387a2312db752b3/examples/run_lm_finetuning.py#L198\r\n\r\nYou can also decrease batch size up to 1.\r\n\r\nAnd also gradients can be accumulated:\r\nhttps://github.com/huggingface/transformers/blob/0477b307c7501ea76e01b03cb387a2312db752b3/examples/run_lm_finetuning.py#L389", "@iedmrc Thanks for the suggestions. I'm using a slightly modified version of run_lm_finetuning.py but for CTRL even with a batch size of 1 the fine tuning consumes more than 16 GB of memory. I tried also to fine tune with half precision (fp16), but for some reason that didn't help either (I have seen other people complain about similar issues here).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "See https://github.com/huggingface/transformers/pull/3578. " ]
1,573
1,596
1,579
NONE
null
Hi. I'm running into memory issues when trying to fine tune the CTRL language model on a 16 GB GPU. Is there any built-in support for splitting models across more than one GPU? I suppose that mapping the input embedding layer to a different GPU or even the CPU would do the trick.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1836/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1835
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1835/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1835/comments
https://api.github.com/repos/huggingface/transformers/issues/1835/events
https://github.com/huggingface/transformers/issues/1835
523,033,574
MDU6SXNzdWU1MjMwMzM1NzQ=
1,835
Parallell compute failing for finetuning GPT2 using GPU
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,573
1,574
1,574
NONE
null
## ? Questions & Help **SYSTEM** OS: Ubuntu 18.04 Python version: 3.6.8 Torch version: 1.0.0 Transformers version: 2.1.1 I am trying to finetune GPT2 using a GPU and am getting an error. I run the `run_lm_finetuning.py` example script with the following parameters: ```python python run_lm_finetuning.py \ --output_dir=/home/user/gpt2_finetuned_isr/model \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=/path/to/train/data/train_data.txt \ --do_eval \ --eval_data_file=/path/to/test/data/test_data.txt ``` Here is the error: ```python /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/user/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) To use data.metrics please install scikit-learn. See https://scikit-learn.org/stable/index.html 11/14/2019 11:01:48 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False 11/14/2019 11:01:49 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/user/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80 11/14/2019 11:01:49 - INFO - transformers.configuration_utils - Model config { "attn_pdrop": 0.1, "embd_pdrop": 0.1, "finetuning_task": null, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_layer": 12, "n_positions": 1024, "num_labels": 1, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "resid_pdrop": 0.1, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "torchscript": false, "use_bfloat16": false, "vocab_size": 50257 } 11/14/2019 11:01:50 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/user/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71 11/14/2019 11:01:50 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/user/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda 11/14/2019 11:01:50 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin from cache at /home/user/.cache/torch/transformers/4295d67f022061768f4adc386234dbdb781c814c39662dd1662221c309962c55.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1 /home/user/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:117: UserWarning: Found GPU0 Quadro 4000 which is of cuda capability 2.0. PyTorch no longer supports this GPU because it is too old. warnings.warn(old_gpu_warn % (d, name, major, capability[1])) 11/14/2019 11:02:10 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1024, cache_dir='', config_name='', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_train=True, eval_all_checkpoints=False, eval_data_file='/home/user/gpt2_finetuned_isr/finetune_gpt2_test.txt', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_steps=-1, mlm=False, mlm_probability=0.15, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='/home/user/gpt2_finetuned_isr/model', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=50, save_total_limit=None, seed=42, server_ip='', server_port='', tokenizer_name='', train_data_file='/home/user/gpt2_finetuned_isr/finetune_gpt2_train.txt', warmup_steps=0, weight_decay=0.0) 11/14/2019 11:02:10 - INFO - __main__ - Creating features from dataset file at /home/user/gpt2_finetuned_isr 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - Token indices sequence length is longer than the specified maximum sequence length for this model (1003332 > 1024). Running this sequence through the model will result in indexing errors 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - This tokenizer does not make use of special tokens. Input is returned with no modification. 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - This tokenizer does not make use of special tokens. Input is returned with no modification. 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - This tokenizer does not make use of special tokens. Input is returned with no modification. 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - This tokenizer does not make use of special tokens. Input is returned with no modification. 11/14/2019 11:02:16 - WARNING - transformers.tokenization_utils - This tokenizer does not make use of special tokens. Input is returned with no modification. BTW, I get TONS of these tokenizer warnings... I won't paste them here as they say the same thing... 11/14/2019 11:02:16 - INFO - __main__ - Saving features into cached file /home/user/gpt2_finetuned_isr/cached_lm_1024_finetune_gpt2_train.txt 11/14/2019 11:02:16 - INFO - __main__ - ***** Running training ***** 11/14/2019 11:02:16 - INFO - __main__ - Num examples = 979 11/14/2019 11:02:16 - INFO - __main__ - Num Epochs = 1 11/14/2019 11:02:16 - INFO - __main__ - Instantaneous batch size per GPU = 4 11/14/2019 11:02:16 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4 11/14/2019 11:02:16 - INFO - __main__ - Gradient Accumulation steps = 1 11/14/2019 11:02:16 - INFO - __main__ - Total optimization steps = 245 Epoch: 0%| | 0/1 [00:00<?, ?it/sTraceback (most recent call last): | 0/245 [00:00<?, ?it/s] File "run_lm_finetuning.py", line 545, in <module> main() File "run_lm_finetuning.py", line 497, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 228, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/user/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 533, in forward head_mask=head_mask) File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__ result = self.forward(*input, **kwargs) File "/home/user/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 385, in forward position_ids = torch.arange(past_length, input_ids.size(-1) + past_length, dtype=torch.long, device=input_ids.device) RuntimeError: parallel_for failed: no kernel image is available for execution on the device Epoch: 0%| | 0/1 [00:00<?, ?it/s] Iteration: 0%| | 0/245 [00:00<?, ?it/s] ``` I see the pytorch warning about the GPU, but it still seems available to use: ```python Python 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.cuda.current_device() /home/user/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:117: UserWarning: Found GPU0 Quadro 4000 which is of cuda capability 2.0. PyTorch no longer supports this GPU because it is too old. warnings.warn(old_gpu_warn % (d, name, major, capability[1])) 0 >>> torch.cuda.device(0) <torch.cuda.device object at 0x7fb833751d30> >>> torch.cuda.device_count() 1 >>> torch.cuda.get_device_name(0) 'Quadro 4000' >>> torch.cuda.is_available() True ``` Any guidance is much appreciated!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1835/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1834
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1834/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1834/comments
https://api.github.com/repos/huggingface/transformers/issues/1834/events
https://github.com/huggingface/transformers/issues/1834
523,021,308
MDU6SXNzdWU1MjMwMjEzMDg=
1,834
Where is Model2Model PreTrainedEncoderDecoder in run_summerization_finetune
{ "login": "yeliu918", "id": 20387632, "node_id": "MDQ6VXNlcjIwMzg3NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/20387632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yeliu918", "html_url": "https://github.com/yeliu918", "followers_url": "https://api.github.com/users/yeliu918/followers", "following_url": "https://api.github.com/users/yeliu918/following{/other_user}", "gists_url": "https://api.github.com/users/yeliu918/gists{/gist_id}", "starred_url": "https://api.github.com/users/yeliu918/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yeliu918/subscriptions", "organizations_url": "https://api.github.com/users/yeliu918/orgs", "repos_url": "https://api.github.com/users/yeliu918/repos", "events_url": "https://api.github.com/users/yeliu918/events{/privacy}", "received_events_url": "https://api.github.com/users/yeliu918/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@yeliu918 was there any ```run_summerization_finetune``` script before? Since I cannot find it now.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,583
1,583
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1834/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1833
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1833/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1833/comments
https://api.github.com/repos/huggingface/transformers/issues/1833/events
https://github.com/huggingface/transformers/pull/1833
522,938,456
MDExOlB1bGxSZXF1ZXN0MzQxMDM5OTgz
1,833
Token indices sequence length is longer than the specified maximum sequence length for this model
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,573
1,573
1,573
MEMBER
null
As of now the tokenizers output a specific warning when an encoded sequence is longer than the maximum specified sequence length, which is model-specific: ``` Token indices sequence length is longer than the specified maximum sequence length for this model (X > 1024). Running this sequence through the model will result in indexing errors ``` It is currently in the `convert_tokens_to_ids` and this leads to two issues: - using `encode` or `encode_plus` methods with a `max_length` specified will still output that warning as the `convert_tokens_to_ids` method is used before the truncation is done. (cf #1791) - since `prepare_for_model` was introduced, I personally feel that all modifications related to the model should happen in that method and not in `tokenize` or `convert_tokens_to_ids`. This PR aims to slightly change the behavior so that both aforementioned issues may be solved by putting the warning in the `prepare_for_model` method if no `max_length` is specified.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1833/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1833", "html_url": "https://github.com/huggingface/transformers/pull/1833", "diff_url": "https://github.com/huggingface/transformers/pull/1833.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1833.patch", "merged_at": 1573765490000 }
https://api.github.com/repos/huggingface/transformers/issues/1832
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1832/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1832/comments
https://api.github.com/repos/huggingface/transformers/issues/1832/events
https://github.com/huggingface/transformers/pull/1832
522,902,198
MDExOlB1bGxSZXF1ZXN0MzQxMDEwMjUw
1,832
replace LambdaLR scheduler wrappers by function
{ "login": "rlouf", "id": 3885044, "node_id": "MDQ6VXNlcjM4ODUwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rlouf", "html_url": "https://github.com/rlouf", "followers_url": "https://api.github.com/users/rlouf/followers", "following_url": "https://api.github.com/users/rlouf/following{/other_user}", "gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}", "starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rlouf/subscriptions", "organizations_url": "https://api.github.com/users/rlouf/orgs", "repos_url": "https://api.github.com/users/rlouf/repos", "events_url": "https://api.github.com/users/rlouf/events{/privacy}", "received_events_url": "https://api.github.com/users/rlouf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, great job tracking and fixing this.\r\nCan you update all the examples as well?", "Great job finding the issue!", "I updated the docs and examples. I am confused because the tests fail on this function:\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n______________ TFDistilBertModelTest.test_pt_tf_model_equivalence ______________\r\n\r\nself = <transformers.tests.modeling_tf_distilbert_test.TFDistilBertModelTest testMethod=test_pt_tf_model_equivalence>\r\n\r\n def test_pt_tf_model_equivalence(self):\r\n if not is_torch_available():\r\n return\r\n \r\n import torch\r\n import transformers\r\n \r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n for model_class in self.all_model_classes:\r\n pt_model_class_name = model_class.__name__[2:] # Skip the \"TF\" at the beggining\r\n pt_model_class = getattr(transformers, pt_model_class_name)\r\n \r\n config.output_hidden_states = True\r\n tf_model = model_class(config)\r\n pt_model = pt_model_class(config)\r\n \r\n # Check we can load pt model in tf and vice-versa with model => model functions\r\n tf_model = transformers.load_pytorch_model_in_tf2_model(tf_model, pt_model, tf_inputs=inputs_dict)\r\n pt_model = transformers.load_tf2_model_in_pytorch_model(pt_model, tf_model)\r\n \r\n # Check predictions on first output (logits/hidden-states) are close enought given low-level computational differences\r\n pt_model.eval()\r\n pt_inputs_dict = dict((name, torch.from_numpy(key.numpy()).to(torch.long))\r\n for name, key in inputs_dict.items())\r\n with torch.no_grad():\r\n pto = pt_model(**pt_inputs_dict)\r\n tfo = tf_model(inputs_dict)\r\n max_diff = np.amax(np.abs(tfo[0].numpy() - pto[0].numpy()))\r\n> self.assertLessEqual(max_diff, 2e-2)\r\nE AssertionError: nan not less than or equal to 0.02\r\n```\r\n\r\nwhich has no apparent link with my changes.", "This test has been failing on and off for a week or so now; I'll look into it soon." ]
1,573
1,573
1,573
CONTRIBUTOR
null
Custom schedulers are currently initiated by wrapping Pytorch's LambdaLR class and passing a method of the wrapping class to the __init__ function of LambdaLR. This approach is not appropriate for several reasons: 1. one does not need to define a class when it only defines a __init__() method; 2. instantiating the parent class by passing a method of the child class creates a cyclical reference which leads to memory leaks. See issues #1742 and #1134. In this commit we replace the wrapper classes with functions that instantiate `LambdaLR` with a custom learning rate function. We use a closure to specify the parameter of the latter. We also do a bit of renaming within the function to explicit the behaviour and removed docstrings that were subsequently not necessary.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1832/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1832", "html_url": "https://github.com/huggingface/transformers/pull/1832", "diff_url": "https://github.com/huggingface/transformers/pull/1832.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1832.patch", "merged_at": 1573765831000 }
https://api.github.com/repos/huggingface/transformers/issues/1831
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1831/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1831/comments
https://api.github.com/repos/huggingface/transformers/issues/1831/events
https://github.com/huggingface/transformers/pull/1831
522,883,756
MDExOlB1bGxSZXF1ZXN0MzQwOTk1MDg5
1,831
sum() is replaced by itertools.chain.from_iterable()
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=h1) Report\n> Merging [#1831](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155c782a2ccd103cf63ad48a2becd7c76a7d2115?src=pr&el=desc) will **decrease** coverage by `1.36%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1831/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1831 +/- ##\n==========================================\n- Coverage 84.16% 82.79% -1.37% \n==========================================\n Files 94 94 \n Lines 14185 14186 +1 \n==========================================\n- Hits 11939 11746 -193 \n- Misses 2246 2440 +194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.22% <100%> (+0.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.85% <0%> (-83.1%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `81.55% <0%> (-15.54%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <0%> (-12.36%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `71.18% <0%> (-2.44%)` | :arrow_down: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.24% <0%> (-2.22%)` | :arrow_down: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1831/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.66% <0%> (-1.34%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=footer). Last update [155c782...7627dde](https://codecov.io/gh/huggingface/transformers/pull/1831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Looks good to me, thank you @iedmrc!", "Great, thanks a lot @iedmrc!" ]
1,573
1,573
1,573
CONTRIBUTOR
null
sum() is the leanest method to flatten a string list, so it's been replaced by itertools.chain.from_iterable() . Please check #1830
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1831/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1831/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1831", "html_url": "https://github.com/huggingface/transformers/pull/1831", "diff_url": "https://github.com/huggingface/transformers/pull/1831.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1831.patch", "merged_at": 1573765915000 }
https://api.github.com/repos/huggingface/transformers/issues/1830
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1830/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1830/comments
https://api.github.com/repos/huggingface/transformers/issues/1830/events
https://github.com/huggingface/transformers/issues/1830
522,835,313
MDU6SXNzdWU1MjI4MzUzMTM=
1,830
GPT2 tokenizer is so slow because of sum()
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for taking the time to look into this and opening a pull request!", "You are welcome,\r\nThis thing has big effort and I always support as a community member :) .\r\nIf you merge PR I would be appreciated because I want to use it originally as provided in the master branch, for my ongoing project.\r\nThanks!", "Your PR was merged, you can now use it from the master branch :)\r\nFeel free to open other issues if you find other sub-optimal processes." ]
1,573
1,573
1,573
CONTRIBUTOR
null
## 🐛 Bug Hi, As the discussion started in that #1621 issue, GPT2 tokenization is so slow even with 50MB of the dataset. I'm using `run_lm_finetuning.py` and here are the steps to reproduce the problem: - Have a dataset not an even bigger one. 20MB of a dataset is enough. - Call `run_lm_finetuning.py` to train (finetune) the dataset. Here are my parameters: ``` --train_data_file "/train/datafile" \ --eval_data_file "/eval/datafile" \ --output_dir "/train/model" \ --model_type gpt2 \ --model_name_or_path distilgpt2 \ --cache_dir "/train/cache" \ --do_train \ --evaluate_during_training \ --per_gpu_train_batch_size 1 \ --per_gpu_eval_batch_size 1 \ --gradient_accumulation_steps 5 \ --overwrite_output_dir \ --seed 99 ``` - You'll see it'll spend 20+ mins (depends on your cpu) to tokenize just 50MB of a text file. I dug into `huggingface/transformers` 's codebase and profiled the tokenization process. And it is obvious that this summation drains the time: https://github.com/huggingface/transformers/blob/155c782a2ccd103cf63ad48a2becd7c76a7d2115/transformers/tokenization_utils.py#L644 I run profiler and here is the result: ``` 73791524 function calls in 1566.379 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 27157 0.083 0.000 0.109 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 27157 0.065 0.000 0.128 0.000 _bootlocale.py:33(getpreferredencoding) 81471 0.070 0.000 0.327 0.000 locale.py:589(setlocale) 27157 0.422 0.000 0.876 0.000 locale.py:647(getpreferredencoding) 27157 0.363 0.000 5.997 0.000 regex.py:328(findall) 27157 0.662 0.000 1.682 0.000 regex.py:434(_compile) 4815114 8.744 0.000 16.641 0.000 tokenization_gpt2.py:139(bpe) 2030532 1.116 0.000 1.887 0.000 tokenization_gpt2.py:149(<lambda>) 27157 22.702 0.001 110.038 0.004 tokenization_gpt2.py:180(_tokenize) 25459527 5.702 0.000 5.702 0.000 tokenization_gpt2.py:194(<genexpr>) 10242602 1.764 0.000 1.764 0.000 tokenization_gpt2.py:195(<genexpr>) 1377876 1.678 0.000 1.975 0.000 tokenization_gpt2.py:91(get_pairs) 95205 0.526 0.000 0.910 0.000 tokenization_utils.py:1043(special_tokens_map) 95205 0.932 0.000 1.987 0.000 tokenization_utils.py:1055(all_special_tokens) 1 0.119 0.119 1566.379 1566.379 tokenization_utils.py:615(tokenize) 40789 0.099 0.000 0.169 0.000 tokenization_utils.py:623(split_on_token) 1 0.287 0.287 1566.260 1566.260 tokenization_utils.py:641(split_on_tokens) 54417 0.698 0.000 112.123 0.002 tokenization_utils.py:659(<genexpr>) 27157 0.063 0.000 0.063 0.000 {built-in method _locale.nl_langinfo} 81471 0.252 0.000 0.252 0.000 {built-in method _locale.setlocale} 761640 0.384 0.000 0.384 0.000 {built-in method builtins.getattr} 54314 0.022 0.000 0.022 0.000 {built-in method builtins.hasattr} 516605 0.150 0.000 0.150 0.000 {built-in method builtins.isinstance} 1821447 0.159 0.000 0.159 0.000 {built-in method builtins.len} 472563 3.469 0.000 5.355 0.000 {built-in method builtins.min} 1 1453.081 1453.081 1565.204 1565.204 {built-in method builtins.sum} 2043214 0.297 0.000 0.297 0.000 {method 'add' of 'set' objects} 456488 0.055 0.000 0.055 0.000 {method 'append' of 'list' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 4815114 1.169 0.000 1.169 0.000 {method 'encode' of 'str' objects} 5550977 16.572 0.000 18.336 0.000 {method 'extend' of 'list' objects} 27157 3.952 0.000 3.952 0.000 {method 'findall' of '_regex.Pattern' objects} 2057689 0.784 0.000 0.784 0.000 {method 'get' of 'dict' objects} 735863 0.233 0.000 0.233 0.000 {method 'index' of 'tuple' objects} 4894984 38.307 0.000 44.010 0.000 {method 'join' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'keys' of 'dict' objects} 4855903 1.365 0.000 1.365 0.000 {method 'split' of 'str' objects} 68048 0.009 0.000 0.009 0.000 {method 'strip' of 'str' objects} 95205 0.024 0.000 0.024 0.000 {method 'values' of 'dict' objects} ``` I turned it into this by removing `sum()` ``` (self._tokenize(token, **kwargs) if token not \ in self.added_tokens_encoder and token not in self.all_special_tokens \ else [token] for token in tokenized_text) ``` and here is the profiler result: ``` 73275678 function calls in 121.030 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 27157 0.058 0.000 0.076 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 27157 0.041 0.000 0.084 0.000 _bootlocale.py:33(getpreferredencoding) 81471 0.058 0.000 0.211 0.000 locale.py:589(setlocale) 27157 0.330 0.000 0.625 0.000 locale.py:647(getpreferredencoding) 27157 0.267 0.000 4.996 0.000 regex.py:328(findall) 27157 0.434 0.000 1.160 0.000 regex.py:434(_compile) 4815114 9.797 0.000 18.875 0.000 tokenization_gpt2.py:139(bpe) 2030532 1.270 0.000 2.100 0.000 tokenization_gpt2.py:149(<lambda>) 27157 24.693 0.001 119.272 0.004 tokenization_gpt2.py:180(_tokenize) 25459527 6.204 0.000 6.204 0.000 tokenization_gpt2.py:194(<genexpr>) 10242602 1.975 0.000 1.975 0.000 tokenization_gpt2.py:195(<genexpr>) 1377876 2.002 0.000 2.328 0.000 tokenization_gpt2.py:91(get_pairs) 68050 0.287 0.000 0.475 0.000 tokenization_utils.py:1043(special_tokens_map) 68050 0.507 0.000 1.061 0.000 tokenization_utils.py:1055(all_special_tokens) 1 0.031 0.031 121.030 121.030 tokenization_utils.py:615(tokenize) 27263 0.077 0.000 0.158 0.000 tokenization_utils.py:623(split_on_token) 1 0.178 0.178 120.999 120.999 tokenization_utils.py:641(split_on_tokens) 1 0.330 0.330 120.350 120.350 tokenization_utils.py:659(<listcomp>) 27157 0.043 0.000 0.043 0.000 {built-in method _locale.nl_langinfo} 81471 0.148 0.000 0.148 0.000 {built-in method _locale.setlocale} 544400 0.188 0.000 0.188 0.000 {built-in method builtins.getattr} 54314 0.014 0.000 0.014 0.000 {built-in method builtins.hasattr} 407985 0.092 0.000 0.092 0.000 {built-in method builtins.isinstance} 1807921 0.181 0.000 0.181 0.000 {built-in method builtins.len} 472563 3.992 0.000 6.092 0.000 {built-in method builtins.min} 2043214 0.326 0.000 0.326 0.000 {method 'add' of 'set' objects} 456488 0.064 0.000 0.064 0.000 {method 'append' of 'list' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 4815114 1.259 0.000 1.259 0.000 {method 'encode' of 'str' objects} 5550977 18.064 0.000 20.040 0.000 {method 'extend' of 'list' objects} 27157 3.569 0.000 3.569 0.000 {method 'findall' of '_regex.Pattern' objects} 2057689 0.839 0.000 0.839 0.000 {method 'get' of 'dict' objects} 735863 0.273 0.000 0.273 0.000 {method 'index' of 'tuple' objects} 4894984 41.821 0.000 48.026 0.000 {method 'join' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'keys' of 'dict' objects} 4842377 1.597 0.000 1.597 0.000 {method 'split' of 'str' objects} 54522 0.007 0.000 0.007 0.000 {method 'strip' of 'str' objects} 68050 0.012 0.000 0.012 0.000 {method 'values' of 'dict' objects} ``` You can see 121 seconds vs 1566 seconds. It is 12x times faster without `sum()`. Okay lets discuss do we need `sum()`? Actually, not. Because the `sum()` just flattens the array with the leanest way and there are far more efficient ways. See that [answer](https://stackoverflow.com/a/953097) on StackOverflow. Also as written in official python [doc](https://docs.python.org/3/library/functions.html#sum) , `sum()` is developed for numbers rather than strings. So I replaced `sum()` with `list(itertools.chain.from_iterable(text))` as follows and run profiler. ``` return list(itertools.chain.from_iterable((self._tokenize(token, **kwargs) if token not \ in self.added_tokens_encoder and token not in self.all_special_tokens \ else [token] for token in tokenized_text))) ``` Here is the result: ``` 73791524 function calls in 114.720 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 27157 0.045 0.000 0.060 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 27157 0.035 0.000 0.067 0.000 _bootlocale.py:33(getpreferredencoding) 81471 0.045 0.000 0.159 0.000 locale.py:589(setlocale) 27157 0.277 0.000 0.502 0.000 locale.py:647(getpreferredencoding) 27157 0.237 0.000 4.258 0.000 regex.py:328(findall) 27157 0.346 0.000 0.929 0.000 regex.py:434(_compile) 4815114 8.703 0.000 16.973 0.000 tokenization_gpt2.py:139(bpe) 2030532 1.171 0.000 1.923 0.000 tokenization_gpt2.py:149(<lambda>) 27157 22.988 0.001 112.449 0.004 tokenization_gpt2.py:180(_tokenize) 25459527 5.708 0.000 5.708 0.000 tokenization_gpt2.py:194(<genexpr>) 10242602 1.755 0.000 1.755 0.000 tokenization_gpt2.py:195(<genexpr>) 1377876 1.595 0.000 1.900 0.000 tokenization_gpt2.py:91(get_pairs) 95205 0.345 0.000 0.565 0.000 tokenization_utils.py:1043(special_tokens_map) 95205 0.581 0.000 1.236 0.000 tokenization_utils.py:1055(all_special_tokens) 1 0.022 0.022 114.720 114.720 tokenization_utils.py:615(tokenize) 40789 0.103 0.000 0.182 0.000 tokenization_utils.py:623(split_on_token) 1 0.583 0.583 114.698 114.698 tokenization_utils.py:641(split_on_tokens) 54417 0.248 0.000 113.314 0.002 tokenization_utils.py:659(<genexpr>) 27157 0.032 0.000 0.032 0.000 {built-in method _locale.nl_langinfo} 81471 0.111 0.000 0.111 0.000 {built-in method _locale.setlocale} 761640 0.219 0.000 0.219 0.000 {built-in method builtins.getattr} 54314 0.012 0.000 0.012 0.000 {built-in method builtins.hasattr} 516605 0.097 0.000 0.097 0.000 {built-in method builtins.isinstance} 1821447 0.166 0.000 0.166 0.000 {built-in method builtins.len} 472563 3.855 0.000 5.777 0.000 {built-in method builtins.min} 1 0.000 0.000 0.000 0.000 {built-in method from_iterable} 2043214 0.305 0.000 0.305 0.000 {method 'add' of 'set' objects} 456488 0.058 0.000 0.058 0.000 {method 'append' of 'list' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 4815114 1.104 0.000 1.104 0.000 {method 'encode' of 'str' objects} 5550977 17.434 0.000 19.189 0.000 {method 'extend' of 'list' objects} 27157 3.092 0.000 3.092 0.000 {method 'findall' of '_regex.Pattern' objects} 2057689 0.759 0.000 0.759 0.000 {method 'get' of 'dict' objects} 735863 0.243 0.000 0.243 0.000 {method 'index' of 'tuple' objects} 4894984 41.030 0.000 46.738 0.000 {method 'join' of 'str' objects} 1 0.000 0.000 0.000 0.000 {method 'keys' of 'dict' objects} 4855903 1.396 0.000 1.396 0.000 {method 'split' of 'str' objects} 68048 0.009 0.000 0.009 0.000 {method 'strip' of 'str' objects} 95205 0.013 0.000 0.013 0.000 {method 'values' of 'dict' objects} ``` It significantly improves the speed as seen in the difference between 114 seconds and 1566 seconds. I'm going to create a pull request if everything is clear? Thank you for your effort.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1830/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/1830/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1829
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1829/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1829/comments
https://api.github.com/repos/huggingface/transformers/issues/1829/events
https://github.com/huggingface/transformers/issues/1829
522,808,987
MDU6SXNzdWU1MjI4MDg5ODc=
1,829
NotImplementedError when using TFDistilBertModel
{ "login": "Riccorl", "id": 10062216, "node_id": "MDQ6VXNlcjEwMDYyMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Riccorl", "html_url": "https://github.com/Riccorl", "followers_url": "https://api.github.com/users/Riccorl/followers", "following_url": "https://api.github.com/users/Riccorl/following{/other_user}", "gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}", "starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions", "organizations_url": "https://api.github.com/users/Riccorl/orgs", "repos_url": "https://api.github.com/users/Riccorl/repos", "events_url": "https://api.github.com/users/Riccorl/events{/privacy}", "received_events_url": "https://api.github.com/users/Riccorl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In my environemnt, it **works as expected**.\r\n\r\n**Environment:**\r\n\r\n- **O.S.** Linux Ubuntu\r\n- **Python**: 3.6.9\r\n- **HuggingFace's Transformers**: 2.1.1 (installed **today** from source with `pip install git+https://github.com/huggingface/transformers`)\r\n- **PyTorch**: 1.3.1\r\n- **TensorFlow**: 2.0\r\n\r\nBelow the execution:\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n/home/<user>/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\n2019-11-14 13:56:38.199689: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-11-14 13:56:38.337808: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz\r\n2019-11-14 13:56:38.338355: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560bcbc86ae0 executing computations on platform Host. Devices:\r\n2019-11-14 13:56:38.338375: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n>>> transformers.__version__\r\n'2.1.1'\r\n>>> import torch\r\n>>> torch.__version__\r\n'1.3.1'\r\n>>> import tensorflow as tf\r\n>>> tf.__version__\r\n'2.0.0'\r\n>>> import platform\r\n>>> platform.platform()\r\n'Linux-4.15.0-69-generic-x86_64-with-debian-buster-sid'\r\n>>> platform.python_version()\r\n'3.6.9'\r\n>>> from transformers import DistilBertTokenizer, TFDistilBertModel\r\n>>> tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\n>>> model = TFDistilBertModel.from_pretrained('distilbert-base-uncased')\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 363423424/363423424 [00:39<00:00, 9179101.04B/s]\r\n2019-11-14 13:58:58.633432: W tensorflow/python/util/util.cc:299] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\r\n>>> \r\n```\r\n\r\n> ## Bug\r\n> I have this error when I train a model with TFDistilBertModel. The same code works if I change to BERT for example. The transformers is loaded with `TFDistilBertModel.from_pretrained(\"distilbert-base-uncased\")`\r\n> \r\n> ```\r\n> File \"/home/<user>/miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py\", line 237, in wrapper\r\n> raise e.ag_error_metadata.to_exception(e)\r\n> NotImplementedError: in converted code:\r\n> relative to /home/<user>:\r\n> \r\n> ../models.py:68 call *\r\n> bert_hidden_states = self.model(\r\n> miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__\r\n> outputs = call_fn(cast_inputs, *args, **kwargs)\r\n> miniconda3/envs/nlp/lib/python3.7/site-packages/transformers/modeling_tf_distilbert.py:545 call *\r\n> outputs = self.distilbert(inputs, **kwargs)\r\n> miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__\r\n> outputs = call_fn(cast_inputs, *args, **kwargs)\r\n> miniconda3/envs/nlp/lib/python3.7/site-packages/transformers/modeling_tf_distilbert.py:431 call *\r\n> raise NotImplementedError\r\n> \r\n> NotImplementedError: \r\n> ```\r\n> \r\n> ## Environment\r\n> * OS: Manjaro\r\n> * Python version: 3.7.5\r\n> * TF: 2.0\r\n> * Using GPU ? Yes", "@TheEdoardo93 thank you for showing me it's not a problem of the library. I uninstalled and reinstalled from source (like you did) and now it works." ]
1,573
1,573
1,573
NONE
null
## 🐛 Bug <!-- Important information --> I have this error when I train a model with TFDistilBertModel. The same code works if I change to BERT for example. The transformers is loaded with `TFDistilBertModel.from_pretrained("distilbert-base-uncased")` ``` File "/home/<user>/miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py", line 237, in wrapper raise e.ag_error_metadata.to_exception(e) NotImplementedError: in converted code: relative to /home/<user>: ../models.py:68 call * bert_hidden_states = self.model( miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) miniconda3/envs/nlp/lib/python3.7/site-packages/transformers/modeling_tf_distilbert.py:545 call * outputs = self.distilbert(inputs, **kwargs) miniconda3/envs/nlp/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) miniconda3/envs/nlp/lib/python3.7/site-packages/transformers/modeling_tf_distilbert.py:431 call * raise NotImplementedError NotImplementedError: ``` ## Environment * OS: Manjaro * Python version: 3.7.5 * TF: 2.0 * Using GPU ? Yes <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1829/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1828
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1828/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1828/comments
https://api.github.com/repos/huggingface/transformers/issues/1828/events
https://github.com/huggingface/transformers/issues/1828
522,745,287
MDU6SXNzdWU1MjI3NDUyODc=
1,828
GPT2 tokenizer is so slow because of regex.findall
{ "login": "iedmrc", "id": 13666448, "node_id": "MDQ6VXNlcjEzNjY2NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/13666448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iedmrc", "html_url": "https://github.com/iedmrc", "followers_url": "https://api.github.com/users/iedmrc/followers", "following_url": "https://api.github.com/users/iedmrc/following{/other_user}", "gists_url": "https://api.github.com/users/iedmrc/gists{/gist_id}", "starred_url": "https://api.github.com/users/iedmrc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iedmrc/subscriptions", "organizations_url": "https://api.github.com/users/iedmrc/orgs", "repos_url": "https://api.github.com/users/iedmrc/repos", "events_url": "https://api.github.com/users/iedmrc/events{/privacy}", "received_events_url": "https://api.github.com/users/iedmrc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,573
1,573
1,573
CONTRIBUTOR
null
## 🐛 Bug Hi, As the discussion started in that #1621 issue, GPT2 tokenization is so slow even with 50MB of the dataset. I'm using `run_lm_finetuning.py` and here are the steps to reproduce the problem: - Have a dataset not an even bigger one. 50MB of a dataset is enough. - Call `run_lm_finetuning.py` to train (finetune) the dataset. Here are my parameters: ``` --train_data_file "/train/datafile" \ --eval_data_file "/eval/datafile" \ --output_dir "/train/model" \ --model_type gpt2 \ --model_name_or_path distilgpt2 \ --cache_dir "/train/cache" \ --do_train \ --evaluate_during_training \ --per_gpu_train_batch_size 1 \ --per_gpu_eval_batch_size 1 \ --gradient_accumulation_steps 5 \ --overwrite_output_dir \ --seed 99 ``` - You'll see it'll spend 20+ mins (depends on your cpu) to tokenize just 50MB of a text file. I dug into `huggingface/transformers` 's codebase and profiled the tokenization process. And it seems like this regex drains the time: https://github.com/huggingface/transformers/blob/155c782a2ccd103cf63ad48a2becd7c76a7d2115/transformers/tokenization_gpt2.py#L193 I run cProfiler and here is the result: ``` 19537 function calls in 0.077 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:1009(_handle_fromlist) 1 0.000 0.000 0.077 0.077 <string>:1(<module>) 1 0.000 0.000 0.000 0.000 _bootlocale.py:33(getpreferredencoding) 3 0.000 0.000 0.000 0.000 locale.py:589(setlocale) 1 0.000 0.000 0.000 0.000 locale.py:647(getpreferredencoding) 1 0.000 0.000 0.035 0.035 regex.py:328(findall) 1 0.000 0.000 0.000 0.000 regex.py:434(_compile) 1457 0.004 0.000 0.006 0.000 tokenization_gpt2.py:139(bpe) 350 0.000 0.000 0.000 0.000 tokenization_gpt2.py:149(<lambda>) 1 0.009 0.009 0.077 0.077 tokenization_gpt2.py:191(plot) 6536 0.002 0.000 0.002 0.000 tokenization_gpt2.py:196(<genexpr>) 3240 0.003 0.000 0.003 0.000 tokenization_gpt2.py:197(<genexpr>) 607 0.001 0.000 0.001 0.000 tokenization_gpt2.py:91(get_pairs) 1 0.000 0.000 0.000 0.000 {built-in method _locale.nl_langinfo} 3 0.000 0.000 0.000 0.000 {built-in method _locale.setlocale} 1 0.000 0.000 0.077 0.077 {built-in method builtins.exec} 2 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr} 5 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance} 332 0.000 0.000 0.000 0.000 {built-in method builtins.len} 86 0.001 0.000 0.001 0.000 {built-in method builtins.min} 350 0.000 0.000 0.000 0.000 {method 'add' of 'set' objects} 85 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1457 0.000 0.000 0.000 0.000 {method 'encode' of 'str' objects} 1594 0.007 0.000 0.010 0.000 {method 'extend' of 'list' objects} 1 0.035 0.035 0.035 0.035 {method 'findall' of '_regex.Pattern' objects} 351 0.000 0.000 0.000 0.000 {method 'get' of 'dict' objects} 137 0.000 0.000 0.000 0.000 {method 'index' of 'tuple' objects} 1474 0.014 0.000 0.016 0.000 {method 'join' of 'str' objects} 1457 0.001 0.000 0.001 0.000 {method 'split' of 'str' objects} ``` P.s. Here `plot()` is the intermadiate function I named in order to profile and it covers that section: ``` for token in re.findall(self.pat, text): if sys.version_info[0] == 2: token = ''.join(self.byte_encoder[ord(b)] for b in token) # Maps all our bytes to unicode strings, avoiding controle tokens of the BPE (spaces in our case) else: token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) # Maps all our bytes to unicode strings, avoiding controle tokens of the BPE (spaces in our case) bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(' ')) ``` Is there anyone having that issue? Haven't anybody tried to finetune GPT2 with a dataset larger than 300MB? It's been 39 hours and my tokenization still in progress. Is it normal or could we optimize it? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1828/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1828/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1827
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1827/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1827/comments
https://api.github.com/repos/huggingface/transformers/issues/1827/events
https://github.com/huggingface/transformers/issues/1827
522,696,567
MDU6SXNzdWU1MjI2OTY1Njc=
1,827
How to get all layers(12) hidden states of BERT?
{ "login": "ChaoYue0307", "id": 9592372, "node_id": "MDQ6VXNlcjk1OTIzNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/9592372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChaoYue0307", "html_url": "https://github.com/ChaoYue0307", "followers_url": "https://api.github.com/users/ChaoYue0307/followers", "following_url": "https://api.github.com/users/ChaoYue0307/following{/other_user}", "gists_url": "https://api.github.com/users/ChaoYue0307/gists{/gist_id}", "starred_url": "https://api.github.com/users/ChaoYue0307/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChaoYue0307/subscriptions", "organizations_url": "https://api.github.com/users/ChaoYue0307/orgs", "repos_url": "https://api.github.com/users/ChaoYue0307/repos", "events_url": "https://api.github.com/users/ChaoYue0307/events{/privacy}", "received_events_url": "https://api.github.com/users/ChaoYue0307/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should have obtained the 12 layers as well as the embedding output. Are you sure you're not mistaking the output of the forward call (which is a tuple as well) with the hidden states?\r\n\r\nJust to make sure, this is the correct way to obtain the hidden states:\r\n\r\n```py\r\nfrom transformers import BertModel, BertConfig\r\n\r\nconfig = BertConfig.from_pretrained(\"xxx\", output_hidden_states=True)\r\nmodel = BertModel.from_pretrained(\"xxx\", config=config)\r\n\r\noutputs = model(inputs)\r\nprint(len(outputs)) # 3\r\n\r\nhidden_states = outputs[2]\r\nprint(len(hidden_states)) # 13\r\n\r\nembedding_output = hidden_states[0]\r\nattention_hidden_states = hidden_states[1:]\r\n```", "> You should have obtained the 12 layers as well as the embedding output. Are you sure you're not mistaking the output of the forward call (which is a tuple as well) with the hidden states?\r\n> \r\n> Just to make sure, this is the correct way to obtain the hidden states:\r\n> \r\n> ```python\r\n> from transformers import BertModel, BertConfig\r\n> \r\n> config = BertConfig.from_pretrained(\"xxx\", output_hidden_states=True)\r\n> model = BertModel.from_pretrained(\"xxx\", config=config)\r\n> \r\n> outputs = model(inputs)\r\n> print(len(outputs)) # 3\r\n> \r\n> hidden_states = outputs[2]\r\n> print(len(hidden_states)) # 13\r\n> \r\n> embedding_output = hidden_states[0]\r\n> attention_hidden_states = hidden_states[1:]\r\n> ```\r\n\r\nThanks a lot, I think I just not find and realized the hidden states are stored at index 2 in the outputs.\r\nBy the way, where can I find the docs about the meaning of stored vectors at each index of the tuples? ", "In the doc for the `outputs` of `BertModel.\r\nit's here: https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel", "But for BERT model there is two input pooled_output and sequence_output.\r\n pooled_output, sequence_output = bert_layer([input_word_id, input_mask, segment_id])\r\n From here how can I get last 3 hidden layer outputs?", "Hidden states will be returned if you will specify it in bert config, as noted above:\r\n```\r\nconfig = BertConfig.from_pretrained(\"xxx\", output_hidden_states=True)\r\nmodel = BertModel.from_pretrained(\"xxx\", config=config)\r\n```", "@LysandreJik In ur code, what are `output[0]` and` output[1]`?", "As it is mentioned in the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), the returns of the BERT model are `(last_hidden_state, pooler_output, hidden_states[optional], attentions[optional])`\r\n\r\n`output[0]` is therefore the last hidden state and `output[1]` is the pooler output.", "@LysandreJik What exactly pooler output is?", "It's written in the documentation:\r\n\r\n> Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pre-training.\r\n> \r\n> This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence.", "@LysandreJik Sorry but I don't understand what this means. If I want to get a vector of a sentence - I use the hidden state (`output[0]`) right?\r\nWhat could pooler output be used for?", "`pooler: (batch, hidden_dim)`: Can be used when you want to have a representation for the whole sequence, like the last state of a RNN would give you. It is used for instance in Text Classification task where the predicted label doesn't depend on each token in the input.", "@mfuntowicz So `output[0]` is for a separate representation of each word in the sequence, and the pooler is for a joint representation of the entire sequence?", "Exactly 😊", "@mfuntowicz Great thanks!\r\nTwo question please:\r\n1. When taking hidden state, I can also access per-token representation of intermediate layers by adding config. Is it possible to do access pooler_output of an intermediate layer?\r\n2. So if I want to analyse sentence similarity (so the sentence \"this desk is green\" will be more similar to \"this chair is yellow\" than to \"We ate pizza\") - Is it better to take pooler output or to average token representation in the hidden states?", "@mfuntowicz Can you please help?", "Hi @orko19,\r\n\r\n1. No this is not possible to do so because the \"pooler\" is a layer in itself in BERT that depends [on the last representation](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L437).\r\n\r\n2. The best would be to finetune the pooling representation for you task and use the pooler then. Using either the pooling layer or the averaged representation of the tokens **as it**, might be too biased towards the training objective it was initially trained for. These layers directly linked to the loss so very prone to high bias.", "@mfuntowicz Great thanks!", "如果是使用的trainer.train()这个是怎么获取它的中间层数据呢", "Then, How to convert the output tensor to numpy?", "> 如果是使用的trainer.train()这个是怎么获取它的中间层数据呢\r\n\r\n老哥,我也遇到这个问题了,请问你最后解决了吗,怎么弄的呢。" ]
1,573
1,689
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I tried to set the output_hidden_states=True, but only got 3 layers of the hidden states of model outputs for BERT, but theoricaly it should be 12, how can I get that?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1827/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1826
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1826/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1826/comments
https://api.github.com/repos/huggingface/transformers/issues/1826/events
https://github.com/huggingface/transformers/issues/1826
522,660,745
MDU6SXNzdWU1MjI2NjA3NDU=
1,826
Regarding Fine-Tuning for Abstractive Summarization
{ "login": "tahmedge", "id": 15964236, "node_id": "MDQ6VXNlcjE1OTY0MjM2", "avatar_url": "https://avatars.githubusercontent.com/u/15964236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tahmedge", "html_url": "https://github.com/tahmedge", "followers_url": "https://api.github.com/users/tahmedge/followers", "following_url": "https://api.github.com/users/tahmedge/following{/other_user}", "gists_url": "https://api.github.com/users/tahmedge/gists{/gist_id}", "starred_url": "https://api.github.com/users/tahmedge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tahmedge/subscriptions", "organizations_url": "https://api.github.com/users/tahmedge/orgs", "repos_url": "https://api.github.com/users/tahmedge/repos", "events_url": "https://api.github.com/users/tahmedge/events{/privacy}", "received_events_url": "https://api.github.com/users/tahmedge/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,579
1,579
NONE
null
Hi, What result did you obtain with your Abstractive Summarization code for CNN-DM? Also, is it implemented based on this paper: https://arxiv.org/abs/1908.08345 ? It will be great if you provide some details regarding it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1826/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
https://api.github.com/repos/huggingface/transformers/issues/1826/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1825
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1825/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1825/comments
https://api.github.com/repos/huggingface/transformers/issues/1825/events
https://github.com/huggingface/transformers/pull/1825
522,659,984
MDExOlB1bGxSZXF1ZXN0MzQwODE1NjYx
1,825
合并
{ "login": "mofengboy", "id": 13678171, "node_id": "MDQ6VXNlcjEzNjc4MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/13678171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mofengboy", "html_url": "https://github.com/mofengboy", "followers_url": "https://api.github.com/users/mofengboy/followers", "following_url": "https://api.github.com/users/mofengboy/following{/other_user}", "gists_url": "https://api.github.com/users/mofengboy/gists{/gist_id}", "starred_url": "https://api.github.com/users/mofengboy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mofengboy/subscriptions", "organizations_url": "https://api.github.com/users/mofengboy/orgs", "repos_url": "https://api.github.com/users/mofengboy/repos", "events_url": "https://api.github.com/users/mofengboy/events{/privacy}", "received_events_url": "https://api.github.com/users/mofengboy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Is this related to this repository?" ]
1,573
1,573
1,573
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1825/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1825", "html_url": "https://github.com/huggingface/transformers/pull/1825", "diff_url": "https://github.com/huggingface/transformers/pull/1825.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1825.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1824
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1824/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1824/comments
https://api.github.com/repos/huggingface/transformers/issues/1824/events
https://github.com/huggingface/transformers/issues/1824
522,627,457
MDU6SXNzdWU1MjI2Mjc0NTc=
1,824
Issue testing run_squad.py example
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Try adding `--version_2_with_negative \\` to your `run_squad.py `script.", "That fixed it, thanks!" ]
1,573
1,573
1,573
CONTRIBUTOR
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I am trying to run the squad example here https://github.com/huggingface/transformers/blob/master/examples/run_squad.py Using the Squad data from here https://rajpurkar.github.io/SQuAD-explorer/ But I get this error >ValueError: For training, each question should have exactly 1 answer. Here is my code ``` !pip install transformers import urllib.request url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json' urllib.request.urlretrieve(url, 'train-v2.0.json') url = 'https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json' urllib.request.urlretrieve(url, 'dev-v2.0.json') !wget 'https://raw.githubusercontent.com/huggingface/transformers/master/examples/run_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad.py' !wget 'https://raw.githubusercontent.com/huggingface/pytorch-transformers/master/examples/utils_squad_evaluate.py' # $SQUAD_DIR/train-v1.1.json SQUAD_Train = '/content/train-v2.0.json' SQUAD_Dev = '/content/dev-v2.0.json' !python run_squad.py \ --model_type bert \ --model_name_or_path bert-base-cased \ --do_train \ --do_eval \ --do_lower_case \ --train_file '$SQUAD_Train' \ --predict_file '$SQUAD_Dev' \ --per_gpu_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` Here is the error message ``` 11/14/2019 05:01:20 - WARNING - __main__ - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False 11/14/2019 05:01:20 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-config.json from cache at /root/.cache/torch/transformers/b945b69218e98b3e2c95acf911789741307dec43c698d35fad11c1ae28bda352.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6 11/14/2019 05:01:20 - INFO - transformers.configuration_utils - Model config { "attention_probs_dropout_prob": 0.1, "finetuning_task": null, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "num_attention_heads": 12, "num_hidden_layers": 12, "num_labels": 2, "output_attentions": false, "output_hidden_states": false, "output_past": true, "pruned_heads": {}, "torchscript": false, "type_vocab_size": 2, "use_bfloat16": false, "vocab_size": 28996 } 11/14/2019 05:01:21 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt from cache at /root/.cache/torch/transformers/5e8a2b4893d13790ed4150ca1906be5f7a03d6c4ddf62296c383f6db42814db2.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1 11/14/2019 05:01:21 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-pytorch_model.bin from cache at /root/.cache/torch/transformers/35d8b9d36faaf46728a0192d82bf7d00137490cd6074e8500778afed552a67e5.3fadbea36527ae472139fe84cddaa65454d7429f12d543d80bfc3ad70de55ac2 11/14/2019 05:01:24 - INFO - transformers.modeling_utils - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.weight', 'qa_outputs.bias'] 11/14/2019 05:01:24 - INFO - transformers.modeling_utils - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] 11/14/2019 05:01:27 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cuda'), do_eval=True, do_lower_case=True, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=384, max_steps=-1, model_name_or_path='bert-base-cased', model_type='bert', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='/tmp/debug_squad/', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=12, predict_file='/content/dev-v2.0.json', save_steps=50, seed=42, server_ip='', server_port='', tokenizer_name='', train_file='/content/train-v2.0.json', verbose_logging=False, version_2_with_negative=False, warmup_steps=0, weight_decay=0.0) 11/14/2019 05:01:27 - INFO - __main__ - Creating features from dataset file at /content/train-v2.0.json Traceback (most recent call last): File "run_squad.py", line 569, in <module> main() File "run_squad.py", line 514, in main train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False) File "run_squad.py", line 307, in load_and_cache_examples version_2_with_negative=args.version_2_with_negative) File "/content/utils_squad.py", line 152, in read_squad_examples "For training, each question should have exactly 1 answer.") ValueError: For training, each question should have exactly 1 answer. ``` For convenience, here's a colab notebook with the code that you can run https://colab.research.google.com/drive/1tNisXX5siuNnkuEQ-X_XdEdDTtdJ0qeL
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1824/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1824/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1823
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1823/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1823/comments
https://api.github.com/repos/huggingface/transformers/issues/1823/events
https://github.com/huggingface/transformers/issues/1823
522,589,963
MDU6SXNzdWU1MjI1ODk5NjM=
1,823
how to use convert_pytorch_checkpoint_to_tf2.py
{ "login": "chiyuzhang94", "id": 33407613, "node_id": "MDQ6VXNlcjMzNDA3NjEz", "avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chiyuzhang94", "html_url": "https://github.com/chiyuzhang94", "followers_url": "https://api.github.com/users/chiyuzhang94/followers", "following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}", "gists_url": "https://api.github.com/users/chiyuzhang94/gists{/gist_id}", "starred_url": "https://api.github.com/users/chiyuzhang94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiyuzhang94/subscriptions", "organizations_url": "https://api.github.com/users/chiyuzhang94/orgs", "repos_url": "https://api.github.com/users/chiyuzhang94/repos", "events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}", "received_events_url": "https://api.github.com/users/chiyuzhang94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, you can show the help with the following command:\r\n\r\n```\r\npython transformers/convert_pytorch_checkpoint_to_tf2.py --help \r\n```\r\n\r\nWhat is your use-case?", "> Hi, you can show the help with the following command:\r\n> \r\n> ```\r\n> python transformers/convert_pytorch_checkpoint_to_tf2.py --help \r\n> ```\r\n> \r\n> What is your use-case?\r\n\r\nThanks for your suggestion. I used convert_bert_pytorch_checkpoint_to_original_tf. It works for me." ]
1,573
1,574
1,574
NONE
null
## ❓ Questions & Help I am wondering how to use convert_pytorch_checkpoint_to_tf2.py to convert pytorch checkpoint to tensorflow checkpoint. I found Comparing-PT-and-TF-models.ipynb have an example to use pytorch_pretrained_bert. But It doesn't work for convert_pytorch_checkpoint_to_tf2. Could you please give me any help? <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1823/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/1823/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1822
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1822/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1822/comments
https://api.github.com/repos/huggingface/transformers/issues/1822/events
https://github.com/huggingface/transformers/pull/1822
522,561,595
MDExOlB1bGxSZXF1ZXN0MzQwNzM5NTUx
1,822
CamemBERT
{ "login": "louismartin", "id": 12654189, "node_id": "MDQ6VXNlcjEyNjU0MTg5", "avatar_url": "https://avatars.githubusercontent.com/u/12654189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/louismartin", "html_url": "https://github.com/louismartin", "followers_url": "https://api.github.com/users/louismartin/followers", "following_url": "https://api.github.com/users/louismartin/following{/other_user}", "gists_url": "https://api.github.com/users/louismartin/gists{/gist_id}", "starred_url": "https://api.github.com/users/louismartin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/louismartin/subscriptions", "organizations_url": "https://api.github.com/users/louismartin/orgs", "repos_url": "https://api.github.com/users/louismartin/repos", "events_url": "https://api.github.com/users/louismartin/events{/privacy}", "received_events_url": "https://api.github.com/users/louismartin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=h1) Report\n> Merging [#1822](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155c782a2ccd103cf63ad48a2becd7c76a7d2115?src=pr&el=desc) will **decrease** coverage by `0.14%`.\n> The diff coverage is `62.5%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1822/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1822 +/- ##\n==========================================\n- Coverage 84.16% 84.02% -0.15% \n==========================================\n Files 94 97 +3 \n Lines 14185 14281 +96 \n==========================================\n+ Hits 11939 11999 +60 \n- Misses 2246 2282 +36\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2NhbWVtYmVydC5weQ==) | `100% <100%> (ø)` | |\n| [transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fY2FtZW1iZXJ0LnB5) | `100% <100%> (ø)` | |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `45.94% <33.33%> (-1.12%)` | :arrow_down: |\n| [transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jYW1lbWJlcnQucHk=) | `36.53% <36.53%> (ø)` | |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `59.45% <66.66%> (+0.63%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.08% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.76% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.61% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.46% <0%> (ø)` | :arrow_up: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/1822/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=footer). Last update [155c782...65549e3](https://codecov.io/gh/huggingface/transformers/pull/1822?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @louismartin many thanks for adding this :heart: \r\n\r\nI tested the implementation a bit, and I got the following error message when I try to save the tokenizer object:\r\n\r\nFor RoBERTa this is working correctly:\r\n\r\n```python\r\nIn [2]: roberta_tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n\r\nIn [3]: roberta_tokenizer.save_pretrained(\"/tmp\")\r\nOut[3]: \r\n('/tmp/vocab.json',\r\n '/tmp/merges.txt',\r\n '/tmp/special_tokens_map.json',\r\n '/tmp/added_tokens.json')\r\n```\r\n\r\nBut for CamemBERT:\r\n\r\n```python\r\nIn [6]: camembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n\r\nIn [7]: camembert_tokenizer.save_pretrained(\"/tmp\")\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-7-417ff9326aaf> in <module>\r\n----> 1 camembert_tokenizer.save_pretrained(\"/tmp\")\r\n\r\n/mnt/transformers-camembert/transformers/tokenization_utils.py in save_pretrained(self, save_directory)\r\n 463 f.write(out_str)\r\n 464\r\n--> 465 vocab_files = self.save_vocabulary(save_directory)\r\n 466\r\n 467 return vocab_files + (special_tokens_map_file, added_tokens_file)\r\n\r\n/mnt/transformers-camembert/transformers/tokenization_utils.py in save_vocabulary(self, save_directory)\r\n 474 Please use :func:`~transformers.PreTrainedTokenizer.save_pretrained` `()` to save the full Tokenizer state if you want to reload it using the :func:`~transformers.PreTrainedTokenizer.from_pretrained` class method.\r\n 475 \"\"\"\r\n--> 476 raise NotImplementedError\r\n 477\r\n 478\r\n```\r\n\r\n`.save_vocabulary()` is used e.g. in the NER example script :)", "> Hi @louismartin many thanks for adding this ❤️\r\n> \r\n> I tested the implementation a bit, and I got the following error message when I try to save the tokenizer object:\r\n> \r\n> For RoBERTa this is working correctly:\r\n> \r\n> ```python\r\n> In [2]: roberta_tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n> \r\n> In [3]: roberta_tokenizer.save_pretrained(\"/tmp\")\r\n> Out[3]: \r\n> ('/tmp/vocab.json',\r\n> '/tmp/merges.txt',\r\n> '/tmp/special_tokens_map.json',\r\n> '/tmp/added_tokens.json')\r\n> ```\r\n> \r\n> But for CamemBERT:\r\n> \r\n> ```python\r\n> In [6]: camembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n> \r\n> In [7]: camembert_tokenizer.save_pretrained(\"/tmp\")\r\n> ---------------------------------------------------------------------------\r\n> NotImplementedError Traceback (most recent call last)\r\n> <ipython-input-7-417ff9326aaf> in <module>\r\n> ----> 1 camembert_tokenizer.save_pretrained(\"/tmp\")\r\n> \r\n> /mnt/transformers-camembert/transformers/tokenization_utils.py in save_pretrained(self, save_directory)\r\n> 463 f.write(out_str)\r\n> 464\r\n> --> 465 vocab_files = self.save_vocabulary(save_directory)\r\n> 466\r\n> 467 return vocab_files + (special_tokens_map_file, added_tokens_file)\r\n> \r\n> /mnt/transformers-camembert/transformers/tokenization_utils.py in save_vocabulary(self, save_directory)\r\n> 474 Please use :func:`~transformers.PreTrainedTokenizer.save_pretrained` `()` to save the full Tokenizer state if you want to reload it using the :func:`~transformers.PreTrainedTokenizer.from_pretrained` class method.\r\n> 475 \"\"\"\r\n> --> 476 raise NotImplementedError\r\n> 477\r\n> 478\r\n> ```\r\n> \r\n> `.save_vocabulary()` is used e.g. in the NER example script :)\r\n\r\nHi Stefan, \r\n\r\nThanks for trying our model!\r\nThe reason is that I haven't implemented saving the tokenizer.\r\nTwo solutions: either don't use `tokenizer.save_pretrained`, or implement it by adapting the necessary methods from [tokenization_xlnet.py](https://github.com/huggingface/transformers/blob/master/transformers/tokenization_xlnet.py) to implement `save_pretrained()`. It is probably only necessary to copy and paste `__setstate__` and `__getstate__` methods :)\r\nI don't have much time to do it in the next few weeks, but don't hesitate if you have more questions!\r\n\r\nLouis\r\n", "Hi @louismartin!!\r\n\r\n### **#1844**\r\n\r\nI rebased on master, did the few discussed tweaks, and merged this: https://github.com/huggingface/transformers/pull/1844", "Great work! \r\n\r\n@stefan-it @julien-c @louismartin \r\n\r\nCould you show how to get the embedding vector of a sentence please?\r\n\r\n```python\r\nfrom transformers import CamembertTokenizer\r\nimport torch\r\n\r\ncamembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n\r\ncamembert_tokenizer.encode(\"Salut, ça va ?\") # How to get embedding of this sentence not just the ids of tokens ? \r\n```", "@stefan-it @julien-c @louismartin @hzitoun\r\nQuestion: the following weights I've extracted with the following code are the weights of the pre-trained CamemBERT model or the sentence embeddings?\r\n\r\n```\r\nfrom transformers import CamembertTokenizer\r\nfrom transformers import CamembertModel\r\n\r\ntext=\"J'aime le camembert !\"\r\ntokenizer = CamembertTokenizer.from_pretrained('camembert-base')\r\nmodel = CamembertModel.from_pretrained('camembert-base', output_hidden_states=True) # add output_hidden_states in order to retrieve ALL hidden states of the CamemBERT model (so the embeddings layer too!)\r\ninput_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\noutput = model(input_ids)\r\n\r\nembeddings = output[2][0]\r\nprint('embeddings: \\n{}'.format(embeddings))\r\n>>> tensor([[[ 0.0399, -0.0788, 0.0552, ..., -0.0855, -0.0394, 0.0222],\r\n [-0.1678, 0.1954, -0.2219, ..., 0.1590, -0.2967, -0.1723],\r\n [ 0.0699, 0.2307, -0.0723, ..., 0.2294, 0.0463, -0.0167],\r\n ...,\r\n [ 0.0766, 0.2548, 0.0690, ..., 0.0172, -0.3713, -0.0262],\r\n [ 0.1681, 0.0253, -0.0386, ..., 0.1626, -0.1203, -0.2415],\r\n [ 0.4876, -0.0714, -0.0020, ..., -0.0628, -0.2701, 0.2210]]],\r\n grad_fn=<NativeLayerNormBackward>)\r\n```\r\n\r\nIt has been tested by comparing with the official Camembert model found [here](https://camembert-model.fr/#contact).\r\n```\r\nimport torch\r\ncamembert = torch.hub.load('pytorch/fairseq', 'camembert.v0')\r\ncamembert.eval() # disable dropout (or leave in train mode to finetune)\r\n\r\nline = \"J'aime le camembert!\"\r\ntokens = camembert.encode(line)\r\n\r\nall_layers = camembert.extract_features(tokens, return_all_hiddens=True)\r\nembeddings = all_layers[0]\r\nprint('embeddings: \\n{}'.format(embeddings)\r\n>>> tensor([[[ 0.0399, -0.0788, 0.0552, ..., -0.0855, -0.0394, 0.0222],\r\n [-0.1678, 0.1954, -0.2219, ..., 0.1590, -0.2967, -0.1723],\r\n [ 0.0699, 0.2307, -0.0723, ..., 0.2294, 0.0463, -0.0167],\r\n ...,\r\n [ 0.0766, 0.2548, 0.0690, ..., 0.0172, -0.3713, -0.0262],\r\n [ 0.1461, 0.0414, -0.0877, ..., -0.0577, -0.2219, -0.3685],\r\n [ 0.4876, -0.0714, -0.0020, ..., -0.0628, -0.2701, 0.2210]]],\r\n grad_fn=<TransposeBackward0>)\r\n```\r\n\r\nMost values correspond, but there are some values different (e.g. 0.1681 vs 0.1461 or 0.2415 vs -0.3685)..\r\n\r\n> Great work!\r\n> \r\n> @stefan-it @julien-c @louismartin\r\n> \r\n> Could you show how to get the embedding vector of a sentence please?\r\n> \r\n> ```python\r\n> from transformers import CamembertTokenizer\r\n> import torch\r\n> \r\n> camembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n> \r\n> camembert_tokenizer.encode(\"Salut, ça va ?\") # How to get embedding of this sentence not just the ids of tokens ? \r\n> ```", "> @stefan-it @julien-c @louismartin @hzitoun\r\n> Question: the following weights I've extracted with the following code are the weights of the pre-trained CamemBERT model or the sentence embeddings?\r\n> \r\n> ```\r\n> from transformers import CamembertTokenizer\r\n> from transformers import CamembertModel\r\n> \r\n> text=\"J'aime le camembert !\"\r\n> tokenizer = CamembertTokenizer.from_pretrained('camembert-base')\r\n> model = CamembertModel.from_pretrained('camembert-base', output_hidden_states=True) # add output_hidden_states in order to retrieve ALL hidden states of the CamemBERT model (so the embeddings layer too!)\r\n> input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)\r\n> output = model(input_ids)\r\n> \r\n> embeddings = output[2][0]\r\n> print('embeddings: \\n{}'.format(embeddings))\r\n> >>> tensor([[[ 0.0399, -0.0788, 0.0552, ..., -0.0855, -0.0394, 0.0222],\r\n> [-0.1678, 0.1954, -0.2219, ..., 0.1590, -0.2967, -0.1723],\r\n> [ 0.0699, 0.2307, -0.0723, ..., 0.2294, 0.0463, -0.0167],\r\n> ...,\r\n> [ 0.0766, 0.2548, 0.0690, ..., 0.0172, -0.3713, -0.0262],\r\n> [ 0.1681, 0.0253, -0.0386, ..., 0.1626, -0.1203, -0.2415],\r\n> [ 0.4876, -0.0714, -0.0020, ..., -0.0628, -0.2701, 0.2210]]],\r\n> grad_fn=<NativeLayerNormBackward>)\r\n> ```\r\n> \r\n> It has been tested by comparing with the official Camembert model found [here](https://camembert-model.fr/#contact).\r\n> \r\n> ```\r\n> import torch\r\n> camembert = torch.hub.load('pytorch/fairseq', 'camembert.v0')\r\n> camembert.eval() # disable dropout (or leave in train mode to finetune)\r\n> \r\n> line = \"J'aime le camembert!\"\r\n> tokens = camembert.encode(line)\r\n> \r\n> all_layers = camembert.extract_features(tokens, return_all_hiddens=True)\r\n> embeddings = all_layers[0]\r\n> print('embeddings: \\n{}'.format(embeddings)\r\n> >>> tensor([[[ 0.0399, -0.0788, 0.0552, ..., -0.0855, -0.0394, 0.0222],\r\n> [-0.1678, 0.1954, -0.2219, ..., 0.1590, -0.2967, -0.1723],\r\n> [ 0.0699, 0.2307, -0.0723, ..., 0.2294, 0.0463, -0.0167],\r\n> ...,\r\n> [ 0.0766, 0.2548, 0.0690, ..., 0.0172, -0.3713, -0.0262],\r\n> [ 0.1461, 0.0414, -0.0877, ..., -0.0577, -0.2219, -0.3685],\r\n> [ 0.4876, -0.0714, -0.0020, ..., -0.0628, -0.2701, 0.2210]]],\r\n> grad_fn=<TransposeBackward0>)\r\n> ```\r\n> \r\n> Most values correspond, but there are some values different (e.g. 0.1681 vs 0.1461 or 0.2415 vs -0.3685)..\r\n> \r\n> > Great work!\r\n> > @stefan-it @julien-c @louismartin\r\n> > Could you show how to get the embedding vector of a sentence please?\r\n> > ```python\r\n> > from transformers import CamembertTokenizer\r\n> > import torch\r\n> > \r\n> > camembert_tokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n> > \r\n> > camembert_tokenizer.encode(\"Salut, ça va ?\") # How to get embedding of this sentence not just the ids of tokens ? \r\n> > ```\r\n\r\n@TheEdoardo93 \r\nThanks for showing how to do the same thing with native `torch` and for pointing out the difference in embedding for some values! \r\n\r\nDoes any one have an idea how to get a fixed shape of embedding vector regardless the number of tokens in a sentence? (in order to do cosine distance for example between two sentences even if they have different tokens sizes) \r\n\r\nCalculating the `mean on axis=1` for example ? I've asked the question on SOF too if you could answer https://stackoverflow.com/questions/59030907/nlp-transformers-how-to-get-a-fixed-embedding-vector-size :) " ]
1,573
1,574
1,573
CONTRIBUTOR
null
Code for the CamemBERT model, adapted from https://github.com/pytorch/fairseq/tree/master/examples/camembert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1822/reactions", "total_count": 15, "+1": 9, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1822/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1822", "html_url": "https://github.com/huggingface/transformers/pull/1822", "diff_url": "https://github.com/huggingface/transformers/pull/1822.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1822.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1821/comments
https://api.github.com/repos/huggingface/transformers/issues/1821/events
https://github.com/huggingface/transformers/issues/1821
522,548,301
MDU6SXNzdWU1MjI1NDgzMDE=
1,821
Generated text makes no sense. Trying to auto-generate sentences like https://transformer.huggingface.co/doc/distil-gpt2
{ "login": "connecteev", "id": 64816, "node_id": "MDQ6VXNlcjY0ODE2", "avatar_url": "https://avatars.githubusercontent.com/u/64816?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connecteev", "html_url": "https://github.com/connecteev", "followers_url": "https://api.github.com/users/connecteev/followers", "following_url": "https://api.github.com/users/connecteev/following{/other_user}", "gists_url": "https://api.github.com/users/connecteev/gists{/gist_id}", "starred_url": "https://api.github.com/users/connecteev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connecteev/subscriptions", "organizations_url": "https://api.github.com/users/connecteev/orgs", "repos_url": "https://api.github.com/users/connecteev/repos", "events_url": "https://api.github.com/users/connecteev/events{/privacy}", "received_events_url": "https://api.github.com/users/connecteev/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Have you ever tried to finetune some hyper-paremeters, such as **temperature** or **seed**? You can also modify the **length** of the generated text by OpenAI GPT-2.\r\n\r\n_Tip_: if you want to make OpenAI GPT-2 model more comfortable and sure about its results, please set the temperature value lower than 0,5 (e.g. 0,2 or 0,3).\r\n\r\n> I would like to replicate the text-generation behavior of https://transformer.huggingface.co/doc/distil-gpt2 on my MacOS. I went to that URL and typed \"A doula is a person\" and kept tabbing for auto-text generation, and got this:\r\n> \r\n> ```\r\n> A doula is a person who cares for a child or infant. A doula is not a midwife. But the two are not entirely independent. Why did I chose a doula and not myself? There are two good reasons: First, since my daughter was still growing , I had the luxury of time to learn as much knowledge about breastfeeding as I could. Second, I have the opportunity\r\n> ```\r\n> \r\n> Quite impressive. But now, I want to replicate this in Python on my system so I can auto-generate text that makes sense. I did the following:\r\n> \r\n> 1. Installed tensor flow\r\n> 2. installed pytorch\r\n> 3. installed huggingface :D\r\n> 4. Based on the recommendation in the [Examples section here](https://huggingface.co/transformers/examples.html#language-generation), I then ran:\r\n> \r\n> ```\r\n> python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2\r\n> ```\r\n> \r\n> and gave it the same sample input:\r\n> \r\n> ```\r\n> Model prompt >>> A doula\r\n> 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 10.24it/s]\r\n> hatter with a broad laugh and a guffaw over her stomach are eminently adorable. In\r\n> Model prompt >>> A doula is a person\r\n> 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 8.94it/s]\r\n> who wishes to be free and independent while having her resources donated to society.\r\n> \r\n> Gender Non-\r\n> ```\r\n> \r\n> As you can see, I get gibberish back, the sentences and context makes no sense. What am I missing?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,579
1,579
NONE
null
I would like to replicate the text-generation behavior of https://transformer.huggingface.co/doc/distil-gpt2 on my MacOS. I went to that URL and typed "A doula is a person" and kept tabbing for auto-text generation, and got this: ``` A doula is a person who cares for a child or infant. A doula is not a midwife. But the two are not entirely independent. Why did I chose a doula and not myself? There are two good reasons: First, since my daughter was still growing , I had the luxury of time to learn as much knowledge about breastfeeding as I could. Second, I have the opportunity ``` Quite impressive. But now, I want to replicate this in Python on my system so I can auto-generate text that makes sense. I did the following: 1. Installed tensor flow 2. installed pytorch 3. installed huggingface :D 4. Based on the recommendation in the [Examples section here](https://huggingface.co/transformers/examples.html#language-generation), I then ran: ``` python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2 ``` and gave it the same sample input: ``` Model prompt >>> A doula 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 10.24it/s] hatter with a broad laugh and a guffaw over her stomach are eminently adorable. In Model prompt >>> A doula is a person 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 8.94it/s] who wishes to be free and independent while having her resources donated to society. Gender Non- ``` As you can see, I get gibberish back, the sentences and context makes no sense. What am I missing?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1821/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1821/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1820/comments
https://api.github.com/repos/huggingface/transformers/issues/1820/events
https://github.com/huggingface/transformers/issues/1820
522,490,984
MDU6SXNzdWU1MjI0OTA5ODQ=
1,820
Is there a way of finetuning DistilGPT2?
{ "login": "aditya1702", "id": 15054664, "node_id": "MDQ6VXNlcjE1MDU0NjY0", "avatar_url": "https://avatars.githubusercontent.com/u/15054664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aditya1702", "html_url": "https://github.com/aditya1702", "followers_url": "https://api.github.com/users/aditya1702/followers", "following_url": "https://api.github.com/users/aditya1702/following{/other_user}", "gists_url": "https://api.github.com/users/aditya1702/gists{/gist_id}", "starred_url": "https://api.github.com/users/aditya1702/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aditya1702/subscriptions", "organizations_url": "https://api.github.com/users/aditya1702/orgs", "repos_url": "https://api.github.com/users/aditya1702/repos", "events_url": "https://api.github.com/users/aditya1702/events{/privacy}", "received_events_url": "https://api.github.com/users/aditya1702/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Yes you can, call `run_lm_finetuning` via these parameters:\r\n\r\n```\r\n--model_type gpt2 \\\r\n--model_name_or_path distilgpt2 \r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I can't find the file `run_lm_finetuning.py` under this repo. Could you please give the full file path please? Thanks a lot.", "> I can't find the file `run_lm_finetuning.py` under this repo. Could you please give the full file path please? Thanks a lot.\r\n\r\nGot it . it has been renamed to [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) now." ]
1,573
1,599
1,579
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hello! I am trying to finetune GPT2 for a project of mine but my aim is to deploy the model on a server and hence, I would like the final model file to be small. I was thinking of using the Distil* models especially the DistilGPT2 but couldnt find it in the run_lm_finetuning.py script. Currently, is finetuning distilGPT2 supported? If yes, then how should I do it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1820/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1819/comments
https://api.github.com/repos/huggingface/transformers/issues/1819/events
https://github.com/huggingface/transformers/issues/1819
522,464,584
MDU6SXNzdWU1MjI0NjQ1ODQ=
1,819
CUDA out of memory on loss.backward when fine-tuning GPT2 (117M)
{ "login": "cppntn", "id": 26765504, "node_id": "MDQ6VXNlcjI2NzY1NTA0", "avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cppntn", "html_url": "https://github.com/cppntn", "followers_url": "https://api.github.com/users/cppntn/followers", "following_url": "https://api.github.com/users/cppntn/following{/other_user}", "gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}", "starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cppntn/subscriptions", "organizations_url": "https://api.github.com/users/cppntn/orgs", "repos_url": "https://api.github.com/users/cppntn/repos", "events_url": "https://api.github.com/users/cppntn/events{/privacy}", "received_events_url": "https://api.github.com/users/cppntn/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "If your gpu is out of memory, try decreasing the batch size (this will save memory). In order to retain the same effective batch size, use gradient accumulation. \r\n\r\nBasically, do `loss.backward()` for each step, but only every, say, 10 steps, do `optimizer.step()` and `optimizer.backward()`.", "Hi @aced125, batch size is already set to one and I have tried values from 1 to 1024 for gradient accumulation, it gives me always CUDA out of memory error.", "Hijacking this issue to say that I have the exact same problem with `xlm-mlm-17-1280`. Even with `batch size = 1`, Apex enabled and two 1080Ti's it always give memory error in the first `loss.backward()` call.\r\n\r\n", "Which optimizer are you using? If you're using the default (AdamW) that may be part of your problem. Different optimizers have different memory requirements. Adam is one of the worst offenders. Give RMSProp a try since it has much less memory overhead. Every additional feature like using momentum will increase memory overhead.", "Hi @dvaltchanov thanks but using RMSprop led me to the same errors", "@antocapp Which block_size are you using? The default (512) or something else? Using a smaller block size (e.g. 256) will also use up less memory. On a smaller card like yours, you basically need to use batch size of 1 with small block size and memory efficient optimizer for it to fit into GPU memory. An alternative is to try running your test on Google Colab or another cloud service where you can get 12+ GB of GPU memory.", "> Hijacking this issue to say that I have the exact same problem with `xlm-mlm-17-1280`. Even with `batch size = 1`, Apex enabled and two 1080Ti's it always give memory error in the first `loss.backward()` call.\r\n\r\nhi,I have the exact same problem with `xlm-mlm-17-1280`.Have you solved this issue?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,593
1,581
NONE
null
## ❓ Questions & Help File "run_lm_finetuning.py", line 551, in <module> main() File "run_lm_finetuning.py", line 503, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File "run_lm_finetuning.py", line 240, in train loss.backward() File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 150, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA out of memory. Tried to allocate 198.00 MiB (GPU 0; 5.93 GiB total capacity; 4.64 GiB already allocated; 54.94 MiB free; 233.05 MiB cached) I encounter the above error with my **1060 GTX 6GB Nvidia, on the GPT-2 small model. The training configs are: batch size = 1 gradient accumulation steps = 1024** (I've started without gradient accumulation, then tried accumulation based on an old issue from this repo, then from small values I went up to this value, but the error always occurs). **If i run with no gradient accumulation, I get this instead:** File "run_lm_finetuning.py", line 228, in train outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 549, in forward inputs_embeds=inputs_embeds) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 460, in forward head_mask=head_mask[i]) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 232, in forward head_mask=head_mask) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 182, in forward x = self.c_attn(x) File "/home/antonio/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/home/antonio/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 488, in forward x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 5.93 GiB total capacity; 4.77 GiB already allocated; 12.81 MiB free; 154.93 MiB cached) Can you please give me a little hint on how to overcome this error, or a little hope to run gpt-2 small on 6 GB of GPU. Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1819/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1819/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1818/comments
https://api.github.com/repos/huggingface/transformers/issues/1818/events
https://github.com/huggingface/transformers/issues/1818
522,355,444
MDU6SXNzdWU1MjIzNTU0NDQ=
1,818
Trouble running fine tuned language model script
{ "login": "Khev", "id": 7317798, "node_id": "MDQ6VXNlcjczMTc3OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/7317798?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Khev", "html_url": "https://github.com/Khev", "followers_url": "https://api.github.com/users/Khev/followers", "following_url": "https://api.github.com/users/Khev/following{/other_user}", "gists_url": "https://api.github.com/users/Khev/gists{/gist_id}", "starred_url": "https://api.github.com/users/Khev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Khev/subscriptions", "organizations_url": "https://api.github.com/users/Khev/orgs", "repos_url": "https://api.github.com/users/Khev/repos", "events_url": "https://api.github.com/users/Khev/events{/privacy}", "received_events_url": "https://api.github.com/users/Khev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, could you specify the versions on which you are running? (especially PyTorch).\r\n\r\n`bool` was introduced in PyTorch 1.2 so, unfortunately, this example will crash if using an anterior version.", "Sorry, forgot to give that info. As you preempted, I'm running torch 1.1.0. Guess I should update?", "The core library should run on PyTorch 1.0.1+, but examples usually require torch 1.2+. It would be better to upgrade if you want to try out the examples, indeed!", "Excellent, will do, thanks! And Kudos on the fast response time BTW :-)", "No problem, glad to help!", "Ah, now I'm getting the following error. Any insight?\r\n\r\nFYI: I'm now at pytorch 1.3.1 -- think I should downgrade to 1.2.0?\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n~/research/transformers/examples/run_lm_finetuning.py in <module>\r\n 546 \r\n 547 if __name__ == \"__main__\":\r\n--> 548 main()\r\n\r\n~/research/transformers/examples/run_lm_finetuning.py in main()\r\n 498 torch.distributed.barrier()\r\n 499 \r\n--> 500 global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n 501 logger.info(\" global_step = %s, average loss = %s\", global_step, tr_loss)\r\n 502 \r\n\r\n~/research/transformers/examples/run_lm_finetuning.py in train(args, train_dataset, model, tokenizer)\r\n 227 labels = labels.to(args.device)\r\n 228 model.train()\r\n--> 229 outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)\r\n 230 loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n 231 \r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 539 result = self._slow_forward(*input, **kwargs)\r\n 540 else:\r\n--> 541 result = self.forward(*input, **kwargs)\r\n 542 for hook in self._forward_hooks.values():\r\n 543 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, masked_lm_labels)\r\n 768 token_type_ids=token_type_ids,\r\n 769 position_ids=position_ids,\r\n--> 770 head_mask=head_mask)\r\n 771 \r\n 772 sequence_output = outputs[0]\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 539 result = self._slow_forward(*input, **kwargs)\r\n 540 else:\r\n--> 541 result = self.forward(*input, **kwargs)\r\n 542 for hook in self._forward_hooks.values():\r\n 543 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask)\r\n 622 head_mask = [None] * self.config.num_hidden_layers\r\n 623 \r\n--> 624 embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)\r\n 625 encoder_outputs = self.encoder(embedding_output,\r\n 626 extended_attention_mask,\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 539 result = self._slow_forward(*input, **kwargs)\r\n 540 else:\r\n--> 541 result = self.forward(*input, **kwargs)\r\n 542 for hook in self._forward_hooks.values():\r\n 543 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids)\r\n 165 token_type_ids = torch.zeros_like(input_ids)\r\n 166 \r\n--> 167 words_embeddings = self.word_embeddings(input_ids)\r\n 168 position_embeddings = self.position_embeddings(position_ids)\r\n 169 token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 539 result = self._slow_forward(*input, **kwargs)\r\n 540 else:\r\n--> 541 result = self.forward(*input, **kwargs)\r\n 542 for hook in self._forward_hooks.values():\r\n 543 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 112 return F.embedding(\r\n 113 input, self.weight, self.padding_idx, self.max_norm,\r\n--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n 115 \r\n 116 def extra_repr(self):\r\n\r\n~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1482 # remove once script supports set_grad_enabled\r\n 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1485 \r\n 1486 \r\n\r\nRuntimeError: index out of range: Tried to access index 38889 out of table with 30521 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418\r\n\r\n```", "Hmm that error should not be linked to a version problem. Did you use the same command:\r\n\r\n```\r\nrun_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=bert \\\r\n --model_name_or_path=bert-base-uncased\\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --mlm\r\n```\r\n\r\nor did you use a different checkpoint?", "I used the same command...", "I also have same issue while finetuning distilgpt2:\r\n\r\n```\r\n File \"train_docker/train\", line 229, in train\r\n outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)\r\n File \"/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 533, in forward\r\n head_mask=head_mask)\r\n File \"/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/modeling_gpt2.py\", line 420, in forward\r\n inputs_embeds = self.wte(input_ids)\r\n File \"/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 114, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/usr/local/lib/python3.7/site-packages/torch/nn/functional.py\", line 1467, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range: Tried to access index 50257 out of table with 50256 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:237\r\n```", "Update: the following two commands, for GTP-2 and roberto, are working. So I will stick to these for now. Thanks for your help!\r\n\r\n```\r\n%run run_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE\r\n```\r\n\r\n```\r\nTRAIN_FILE='/home/khev/research/transformers/data/wikitext-2-raw/wiki.train.raw'\r\nTEST_FILE='/home/khev/research/transformers/data/wikitext-2-raw/wiki.test.raw'\r\n\r\n%run run_lm_finetuning.py \\\r\n --output_dir=output \\\r\n --model_type=roberta \\\r\n --model_name_or_path=roberta-base \\\r\n --do_train \\\r\n --train_data_file=$TRAIN_FILE \\\r\n --do_eval \\\r\n --eval_data_file=$TEST_FILE \\\r\n --mlm\r\n```", "Thanks for reporting those bugs @Khev and @iedmrc, I'll check what's going on for BERT and DistilGPT-2.", "I found that It occurs when we have a \"cached feature file\".", "I have one last question. I want to use the fine tuning script on a custom dataset: the titles of scientific papers. So my dataset has form: \r\n\r\n[title1, title2, ..., ]\r\n\r\nwhere title_i = 'this is a paper title'\r\n\r\nI'm wondering how this dataset should be structured. I assume it should mimic the wikitext dataset, but I'm not entirely sure of their structure. It looks like sections are delimited by \" ==\". For exampl, in the wiki.test.raw file, the first few lines are:\r\n\r\n\"\"\"\r\n= Robert Boulter =\r\n\r\nTEXT\r\n\r\n== Career ==\r\n\"\"\"\r\n\r\nShould I put my dataset in this form? So like:\r\n\r\n== Paper1 ==\r\ntitle 1\r\n\r\n== Paper2\r\ntitle 2?\r\n", "To me, you should use <|endoftext|> as a delimiter in the gpt2 case. Check page 4 of the original gpt paper openai published. \r\nhttps://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf", "As @iedmrc said, you should probably mimic the way the individual models were pre-trained instead. \r\n\r\nFor example, for GPT-2 you might want to add the suffix `<|endoftext|>` (available via the `tokenizer.eos_token` attribute) to indicate to the model you're joining different texts. \r\n\r\nWe're not doing that in the `run_lm_finetuning.py` script as we want to keep the example simple and the texts are long enough in the WikiText-2 dataset for it not to affect performance too much.", "I understand. Thanks!", "I just pushed a fix on `master` that should correct the second issue you faced. Feel free to re-open if it did not solve your issue so that I may investigate further." ]
1,573
1,573
1,573
NONE
null
## 🐛 Bug I'm having trouble running the finetuning script. When I run ``` TRAIN_FILE='/home/khev/research/transformers/data/wikitext-2-raw/wiki.train.raw' TEST_FILE='/home/khev/research/transformers/data/wikitext-2-raw/wiki.test.raw' %run run_lm_finetuning.py \ --output_dir=output \ --model_type=bert \ --model_name_or_path=bert-base-uncased\ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm ``` I get the following error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) ~/research/transformers/examples/run_lm_finetuning.py in <module> 545 546 if __name__ == "__main__": --> 547 main() ~/research/transformers/examples/run_lm_finetuning.py in main() 497 torch.distributed.barrier() 498 --> 499 global_step, tr_loss = train(args, train_dataset, model, tokenizer) 500 logger.info(" global_step = %s, average loss = %s", global_step, tr_loss) 501 ~/research/transformers/examples/run_lm_finetuning.py in train(args, train_dataset, model, tokenizer) 222 epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0]) 223 for step, batch in enumerate(epoch_iterator): --> 224 inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch) 225 inputs = inputs.to(args.device) 226 labels = labels.to(args.device) ~/research/transformers/examples/run_lm_finetuning.py in mask_tokens(inputs, tokenizer, args) 147 probability_matrix = torch.full(labels.shape, args.mlm_probability) 148 special_tokens_mask = [tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()] --> 149 probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0) 150 masked_indices = torch.bernoulli(probability_matrix).bool() 151 labels[~masked_indices] = -1 # We only compute loss on masked tokens RuntimeError: Expected object of scalar type Byte but got scalar type Bool for argument #2 'mask' ``` Info: Python 3.7.3 Torch 1.1.0 Ubuntu 18.04.3 LTS
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1818/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1817/comments
https://api.github.com/repos/huggingface/transformers/issues/1817/events
https://github.com/huggingface/transformers/issues/1817
522,275,626
MDU6SXNzdWU1MjIyNzU2MjY=
1,817
Using multiple inputs for GPT-2
{ "login": "giterdun345", "id": 46477620, "node_id": "MDQ6VXNlcjQ2NDc3NjIw", "avatar_url": "https://avatars.githubusercontent.com/u/46477620?v=4", "gravatar_id": "", "url": "https://api.github.com/users/giterdun345", "html_url": "https://github.com/giterdun345", "followers_url": "https://api.github.com/users/giterdun345/followers", "following_url": "https://api.github.com/users/giterdun345/following{/other_user}", "gists_url": "https://api.github.com/users/giterdun345/gists{/gist_id}", "starred_url": "https://api.github.com/users/giterdun345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giterdun345/subscriptions", "organizations_url": "https://api.github.com/users/giterdun345/orgs", "repos_url": "https://api.github.com/users/giterdun345/repos", "events_url": "https://api.github.com/users/giterdun345/events{/privacy}", "received_events_url": "https://api.github.com/users/giterdun345/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,579
1,579
NONE
null
## ❓ Questions & Help A current project I am working on, to utilize a tuned approach in text generation, is creating a cover letter based off of a job description and some personal qualifications. From researching, I found the convAI blog that you used and thought to apply the concept with OpenAIGPTDoubleHeadsModel. I added special tokens to the vocabulary for delimiters and segment indicators to include word, position and segment embeddings. I have the persona ('some personal qualifications'), turned history into job description and the reply was "I am qualified for this position because...". From my understanding, the attention will help relate the job description and persona to the text generation. Now I am stuck when it comes to fine tuning the model. I have a bunch of cover letters to use scraped from the web. I have fine tuned a base model and it outputs some coherent cover letters but I want to incorporate the skills needed in the job description for a more accurate an specific cover letter. Am I on the right track and if so how do I fine tune the model on the cover letters? Any suggestions or ideas are welcomed. <!-- A clear and concise description of the question. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1817/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1816/comments
https://api.github.com/repos/huggingface/transformers/issues/1816/events
https://github.com/huggingface/transformers/issues/1816
522,074,509
MDU6SXNzdWU1MjIwNzQ1MDk=
1,816
Best way to fine tune GPT-2 in order to create a custom text generator?
{ "login": "cppntn", "id": 26765504, "node_id": "MDQ6VXNlcjI2NzY1NTA0", "avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cppntn", "html_url": "https://github.com/cppntn", "followers_url": "https://api.github.com/users/cppntn/followers", "following_url": "https://api.github.com/users/cppntn/following{/other_user}", "gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}", "starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cppntn/subscriptions", "organizations_url": "https://api.github.com/users/cppntn/orgs", "repos_url": "https://api.github.com/users/cppntn/repos", "events_url": "https://api.github.com/users/cppntn/events{/privacy}", "received_events_url": "https://api.github.com/users/cppntn/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, you can use a combination of the scripts `run_lm_finetuning.py` and `run_generation.py` to accomplish what you want:\r\n\r\n- Fine-tune GPT-2 to your dataset using `run_lm_finetuning.py`. The default parameters should work well enough, I usually use three epochs (rather than the default 1) when training on small datasets. I have had success with datasets as small as a few 10s of MBs, but I have never tried with less.\r\n\r\n- Generate text by using `run_generation.py` and specifying your custom checkpoint. Specify a `--length` of 200 or 300.", "Thanks @LysandreJik !\r\n\r\nCan you point me out how to organize my dataset file(s) or where to look within the repository? Moreover, does fine-tuning handle OOV words?\r\n\r\nThanks again", "Merge your files into one by splitting them via <|endoftext|> token. You could also split the dataset into two files in order to have a dataset for evaluation. I use %90 and %10 for training and evaluation, respectively. Don't remove any stopwords etc. GPT2 will do the rest. Don't forget that GPT2 is so powerful to learn from your dataset so it may slightly overfit if you have not enough data. For example, train GPT2 with just 10MB data and you'll see it won't generate anything other than learnt from the dataset.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Did you use a special token to separate the title and the corpus?", "I think this would depend on what specifically you are inputting. If it's a title that you want to be part of the body (part of the first sentence) then you wouldn't want to break that sentence up with a separate token. If it's a title that you want the document to derive the topic from but not include as part of the body then a separate token might be helpful to prevent the model from expanding the title to form the body. :)", "> If it's a title that you want the document to derive the topic from but not include as part of the body then a separate token might be helpful to prevent the model from expanding the title to form the body. :)\r\n\r\nSo in essence, if you want to have a title should be used as context but not include as part of the body, you should structure data as:\r\n```\r\nTitle: This is a great title\r\n<|endoftext|>\r\nTitles are one of the greatest inventions of humanity. \r\nWell crafted titles continue to save countless man-years by not requiring readers to actually read the article.\r\n<|endoftext|>\r\nTitle: ...\r\n```\r\n\r\nIs that what you mean? Because intuitively I would assume that this wouldn't work as intended: since the Title is separated in a different token, it should not influence the next token. Or am I missing something?", "Same question. Is there different token for separating the title and the body?", "You could add a special token to the tokenizer and train the dataset with that." ]
1,573
1,595
1,579
NONE
null
Hello to everyone, and thanks for this wonderful work. I am new to this library and I would appreciate an help for a task that i want to accomplish, just to know if I am acting right, to create a custom english text generator, such that giving it an input (title/sentence) it would generate 200-300 words based on that input. My questions are: 1) I have prepared my dataset (each input is composed basically by the title and the corpus), what file should I look for to fine-tune GPT-2: run_lm_finetuning.py ? How many epochs/iterations do you suggest to run to fine tune? How large the dataset should be? 2) Once I have fine-tuned GPT-2, how to generate my custom text giving as input a title/sentence and using the fine-tuned model? Thanks a lot
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1816/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 5 }
https://api.github.com/repos/huggingface/transformers/issues/1816/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1815/comments
https://api.github.com/repos/huggingface/transformers/issues/1815/events
https://github.com/huggingface/transformers/issues/1815
521,939,024
MDU6SXNzdWU1MjE5MzkwMjQ=
1,815
computing self-attention for tokens in a sentence
{ "login": "vr25", "id": 22553367, "node_id": "MDQ6VXNlcjIyNTUzMzY3", "avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vr25", "html_url": "https://github.com/vr25", "followers_url": "https://api.github.com/users/vr25/followers", "following_url": "https://api.github.com/users/vr25/following{/other_user}", "gists_url": "https://api.github.com/users/vr25/gists{/gist_id}", "starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vr25/subscriptions", "organizations_url": "https://api.github.com/users/vr25/orgs", "repos_url": "https://api.github.com/users/vr25/repos", "events_url": "https://api.github.com/users/vr25/events{/privacy}", "received_events_url": "https://api.github.com/users/vr25/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "To get the attention score of each heads, refer to the [documentation](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel) : see `outputs`.\r\n\r\n---\r\n\r\n> how do I know which heads give the best scores ?\r\n\r\nYou can't really know. Each heads will compute some kind of attention. It can be attention for `this token refer to that token`, or it can be attention for `this token is similar to that token` or even if can be any kind of relation, even something, we human, can't interpret. \r\n\r\nIf you want to extract some specific kind of attention, I see no other solution than manually inspect all the heads' attention and empirically choose one (or several). \r\n\r\nSome papers did it : [Contrastive Attention Mechanism for Abstractive Sentence Summarization](https://arxiv.org/abs/1910.13114)", "Thanks for your reply.\r\n\r\nPlease refer to [here](https://github.com/huggingface/transformers/issues/2054) for more details.\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,581
1,581
NONE
null
## ❓ Questions & Help Hi, Please refer to this [issue](https://github.com/google-research/bert/issues/914). Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1815/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1814/comments
https://api.github.com/repos/huggingface/transformers/issues/1814/events
https://github.com/huggingface/transformers/pull/1814
521,913,516
MDExOlB1bGxSZXF1ZXN0MzQwMjAzNDgy
1,814
Sample a constant number of tokens for masking in LM finetuning
{ "login": "rakeshchada", "id": 2664691, "node_id": "MDQ6VXNlcjI2NjQ2OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2664691?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rakeshchada", "html_url": "https://github.com/rakeshchada", "followers_url": "https://api.github.com/users/rakeshchada/followers", "following_url": "https://api.github.com/users/rakeshchada/following{/other_user}", "gists_url": "https://api.github.com/users/rakeshchada/gists{/gist_id}", "starred_url": "https://api.github.com/users/rakeshchada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rakeshchada/subscriptions", "organizations_url": "https://api.github.com/users/rakeshchada/orgs", "repos_url": "https://api.github.com/users/rakeshchada/repos", "events_url": "https://api.github.com/users/rakeshchada/events{/privacy}", "received_events_url": "https://api.github.com/users/rakeshchada/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @rakeshchada, we've thought about this. It is interesting and we might merge it, but will probably will be behind a flag :)\r\n\r\nThanks for contributing!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,573
1,583
1,583
CONTRIBUTOR
null
@thomwolf @LysandreJik @julien-c Re-creating this PR as the original PR https://github.com/huggingface/transformers/pull/1555 that I created about a month ago has become quite stale. Please review. For Masked LM fine-tuning, I think both the original BERT and RoBERTa implementations uniformly sample x number of tokens in *each* sequence for masking (where x = mlm_probability * 100 * sequence_length) However, The current logic in run_lm_finetuning.py does an indepdendent sampling (from bernoulli distribution) for each token in the sequence. This leads to variance in the number of masked tokens (with the average number still close to x%). The below example illustrates an extreme case, of the current logic, where no token in the input sequence is masked. ``` In [1]: import numpy as np ...: import torch ...: from transformers import BertTokenizer ...: ...: mlm_probability = 0.15 ...: tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') ...: ...: tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode('please mask me, o lord!', add_special_tokens=True)) ...: ...: input_ids = tokenizer.convert_tokens_to_ids(tokens) ...: ...: inputs = torch.Tensor([input_ids]) ...: ...: labels = inputs.clone() ...: ...: probability_matrix = torch.full(labels.shape, mlm_probability) ...: ...: special_tokens_mask = [tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()] ...: probability_matrix.masked_fill_(torch.tensor(special_tokens_mask, dtype=torch.bool), value=0.0) ...: masked_indices = torch.bernoulli(probability_matrix).bool() ...: ...: In [2]: masked_indices Out[2]: tensor([[False, False, False, False, False, False, False, False, False]]) ``` This PR modifies the logic so the percentage of masked tokens is constant (at x). Separately, the existing and the new masking logic both rely on boolean tensors of pytorch. So, this also updates README to include the minimum pytorch version needed. (1.2.0)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1814/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1814", "html_url": "https://github.com/huggingface/transformers/pull/1814", "diff_url": "https://github.com/huggingface/transformers/pull/1814.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1814.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1813/comments
https://api.github.com/repos/huggingface/transformers/issues/1813/events
https://github.com/huggingface/transformers/issues/1813
521,832,057
MDU6SXNzdWU1MjE4MzIwNTc=
1,813
LM Fine-tuning for XLNET?
{ "login": "kevinmandich", "id": 8010735, "node_id": "MDQ6VXNlcjgwMTA3MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/8010735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevinmandich", "html_url": "https://github.com/kevinmandich", "followers_url": "https://api.github.com/users/kevinmandich/followers", "following_url": "https://api.github.com/users/kevinmandich/following{/other_user}", "gists_url": "https://api.github.com/users/kevinmandich/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevinmandich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevinmandich/subscriptions", "organizations_url": "https://api.github.com/users/kevinmandich/orgs", "repos_url": "https://api.github.com/users/kevinmandich/repos", "events_url": "https://api.github.com/users/kevinmandich/events{/privacy}", "received_events_url": "https://api.github.com/users/kevinmandich/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I'm also looking forward to it! ", "I might be naive here, but will adding xlnet's config, LMHeadModel, and tokenizer to MODEL_CLASSES work?\r\nhttps://github.com/huggingface/transformers/blob/74ce8de7d8e0375a9123f9542f3483f46cc8df9b/examples/run_lm_finetuning.py#L61", "I don't think adding the Python classes for XLNet is enough. The code distinguishes between two modes of fine-tuning a language model: 1. masked language models (MLM), i.e. BERT and company 2. Sequential left-to-right LM, GPT and GPT2. \r\n\r\nCheck this, for example:\r\nhttps://github.com/huggingface/transformers/blob/74ce8de7d8e0375a9123f9542f3483f46cc8df9b/examples/run_lm_finetuning.py#L224\r\n\r\nXLNet is neither. Its LM task is to predict factorization permutations (check paper for more details) of a given sequence. In simple words, it splits in a sequence into two sequential sub-sequences in all possible ways and it uses the first one as input for predicting the second one.\r\n\r\nThe original implementation has sample code on how to produce the factorization permutation:\r\nhttps://github.com/zihangdai/xlnet/blob/5cd50bc451436e188a8e7fea15358d5a8c916b72/data_utils.py#L579", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Pretty much looking forward to it" ]
1,573
1,583
1,580
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> Hi there, is there any plan to add support for fine-tuning XLNET? This one currently isn't available in run_lm_finetuning.py. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1813/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 5 }
https://api.github.com/repos/huggingface/transformers/issues/1813/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1812/comments
https://api.github.com/repos/huggingface/transformers/issues/1812/events
https://github.com/huggingface/transformers/pull/1812
521,824,315
MDExOlB1bGxSZXF1ZXN0MzQwMTMwNTYz
1,812
Update conversion script to convert XLM-R
{ "login": "Tiiiger", "id": 19514537, "node_id": "MDQ6VXNlcjE5NTE0NTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tiiiger", "html_url": "https://github.com/Tiiiger", "followers_url": "https://api.github.com/users/Tiiiger/followers", "following_url": "https://api.github.com/users/Tiiiger/following{/other_user}", "gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions", "organizations_url": "https://api.github.com/users/Tiiiger/orgs", "repos_url": "https://api.github.com/users/Tiiiger/repos", "events_url": "https://api.github.com/users/Tiiiger/events{/privacy}", "received_events_url": "https://api.github.com/users/Tiiiger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=h1) Report\n> Merging [#1812](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155c782a2ccd103cf63ad48a2becd7c76a7d2115?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1812/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1812 +/- ##\n=======================================\n Coverage 84.16% 84.16% \n=======================================\n Files 94 94 \n Lines 14185 14185 \n=======================================\n Hits 11939 11939 \n Misses 2246 2246\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=footer). Last update [155c782...6d2a5bf](https://codecov.io/gh/huggingface/transformers/pull/1812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is great indeed but we are losing backward compatibility to convert the original RoBERTa weights aren't we?\r\n\r\nMaybe we can add a flag to control which version of fairseq RoBERTa the original weight come from? For instance a flag as a string selected between `roberta` and `XLM-R`.", "I think the situation is a bit more complicated than I thought. We also need to change the vocabulary size, etc. I think it would be better to have a separate treatment for XLM-R. So I am closing this for now. " ]
1,573
1,573
1,573
NONE
null
It requires a few changes to the existing RoBERTa conversion script because `fairseq` has updated the RoBERTa model definition. Specifically, the definition of multiheaded attention is changed at https://github.com/pytorch/fairseq/blob/2a9b4ec2374574cd0315f95e126788e4fe795f0d/fairseq/modules/multihead_attention.py#L42 With this new script, you should be able to convert the XLM-R model released at https://github.com/pytorch/fairseq/tree/master/examples/xlmr and requested by #1769. I have successfully converted the two released model weights.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1812/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1812/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1812", "html_url": "https://github.com/huggingface/transformers/pull/1812", "diff_url": "https://github.com/huggingface/transformers/pull/1812.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1812.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1811/comments
https://api.github.com/repos/huggingface/transformers/issues/1811/events
https://github.com/huggingface/transformers/pull/1811
521,786,021
MDExOlB1bGxSZXF1ZXN0MzQwMDk4NzM1
1,811
Fix special tokens addition in decoder #1807
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=h1) Report\n> Merging [#1811](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155c782a2ccd103cf63ad48a2becd7c76a7d2115?src=pr&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1811/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1811 +/- ##\n==========================================\n- Coverage 84.16% 84.08% -0.09% \n==========================================\n Files 94 94 \n Lines 14185 14047 -138 \n==========================================\n- Hits 11939 11811 -128 \n+ Misses 2246 2236 -10\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <100%> (ø)` | :arrow_up: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `92.12% <100%> (+0.92%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <0%> (-0.37%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2N0cmwucHk=) | `97.75% <0%> (-0.11%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.39% <0%> (-0.08%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `98.59% <0%> (-0.07%)` | :arrow_down: |\n| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/1811/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=footer). Last update [155c782...74d0bcb](https://codecov.io/gh/huggingface/transformers/pull/1811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Nice fix, a lot cleaner as well" ]
1,573
1,573
1,573
MEMBER
null
Fixes the issue detailed in #1807 Added special tokens should not be present when decoding with the `skip_special_tokens` flag set to `True`. Using the `convert_tokens_to_ids` checks whether those added tokens are present and removes them if necessary.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1811", "html_url": "https://github.com/huggingface/transformers/pull/1811", "diff_url": "https://github.com/huggingface/transformers/pull/1811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1811.patch", "merged_at": 1573766245000 }